Search Results

Search found 85366 results on 3415 pages for 'source file'.


  • File exists but is unreadable by PHP

    - by Aron
    More than once I have run into this issue: I have a cache file that is automatically generated by PHP. It contains some generated PHP code. However, for some reason the file cannot be read and parsed by PHP. These are the symptoms:

    The file actually exists on the file system. Using Terminal you can navigate to it, view its contents (which are fully intact), etc. PHP file_exists() will report that the file exists... which is correct, since it does :) But when I include() the file and PHP actually parses it, PHP just treats it as an empty file: no fatal error, but no PHP code actually executed either. Again, it's as if the file were completely empty (which, I assure you, it is not). It is not a permissions issue; permissions are set as needed.

    Workaround: open the file in Terminal via nano or some other text editor and just save it back to disk. After that (despite no changes to the content) PHP will run it just fine.

    As a clarification, this happens rarely, but frequently enough to be a problem. And even when it does, there are hundreds of other similar files on the same system that work without a problem. If this were an issue affecting only my own scripts, I would assume there must be a bug in the way I generate the PHP code. But no, the issue has occurred more than once when deploying to a server (usually from a Beanstalk repository via FTP). The issue has been present on various servers, Debian and Ubuntu, running Zend Community Server. Any ideas? One that crossed my mind was opcode caching (part of Zend Server CE): could it be that an empty version of the file is cached if it is requested while the write operation is still in progress?
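
    If a half-written file being picked up by PHP or the opcode cache is indeed the culprit, the standard mitigation is to write the generated code to a temporary file and atomically rename it into place; readers then see either the old file or the complete new one, never a partial write. A minimal shell sketch of the idea, with illustrative paths (generate_php_code stands in for whatever actually produces the code):

      #!/bin/bash
      # Write the new cache file under a temporary name on the SAME filesystem,
      # then atomically swap it into place so no reader ever sees a partial file.
      tmp=$(mktemp /var/www/cache/generated.php.XXXXXX)
      generate_php_code > "$tmp"     # stand-in for whatever emits the PHP
      chmod 644 "$tmp"
      mv -f "$tmp" /var/www/cache/generated.php   # rename() is atomic on one filesystem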

    Read the article

  • Making a sldprt to PDB file converter?

    - by user122083
    I want to create a parser that can read a SolidWorks file and turn it into a Protein Data Bank file. This has already been done in a program called DiamondCAD: http://www.zyvex.com/Research/DiamondCAD.html I want to make a parser that can parse the data and then visualize it the same way DiamondCAD does. I have downloaded and opened SolidWorks files before, and they make no sense to me, full of never-before-seen symbols that look like ancient writing. Does anyone know how a .sldprt file is structured, and how it can be parsed into a PDB file? (A program called VMD converts a PDB file to an .obj file, so this is proof of concept.)

    Read the article

  • Run two shell files with threads

    - by user1149157
    How can I run two shell files in parallel so that they do not share the same JVM? Maybe I should use threads, but then how do I run two shell files with two threads?

    File 1:

      #!/bin/bash
      #
      # Script for running several experimentations on the same JVM
      # Usage : TRACE_DIR NB_EXPE Factories...
      #
      param="parameter1"
      another="parameter2"
      for ((i = 10; i >= 0; i -= 1))
      do
          echo "run my file with param another"
      done

    File 2:

      #!/bin/bash
      #
      # Script for running several experimentations on the same JVM
      # Usage : TRACE_DIR NB_EXPE Factories...
      #
      a="101"
      b="400"
      c="500"
      echo "run my programme with a b c"
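
    Assuming the two scripts are saved as file1.sh and file2.sh (names are illustrative), no threads are needed: the shell can launch them as separate background processes, and each process gets its own JVM if it starts Java. A minimal sketch:

      #!/bin/bash
      # Run both scripts as independent background processes; each one is a
      # separate process and therefore gets its own JVM if it launches Java.
      ./file1.sh &
      pid1=$!
      ./file2.sh &
      pid2=$!
      wait "$pid1" "$pid2"   # block until both have finished
      echo "both scripts done"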

    Read the article

  • Dojo extending dojo.dnd.Source, move not happening. Ideas?

    - by Soulhuntre
    Hey all,

    OK... I have a simple Dojo page with the bare essentials: three ULs with some LIs in them. The idea is to allow drag-and-drop among them, but if any UL goes empty due to the last item being dragged out, I will put up a message to give the user some instructions. In order to do that, I wanted to extend the dojo.dnd.Source dijit and add some intelligence. It seemed easy enough. To keep things simple (I am loading Dojo from a CDN), I am simply declaring my extension as opposed to doing a full-on module load. The declaration function is here:

      function declare_mockupSmartDndUl(){
          dojo.require("dojo.dnd.Source");
          dojo.provide("mockup.SmartDndUl");
          dojo.declare("mockup.SmartDndUl", dojo.dnd.Source, {
              markupFactory: function(params, node){
                  //params._skipStartup = true;
                  return new mockup.SmartDndUl(node, params);
              },
              onDndDrop: function(source, nodes, copy){
                  console.debug('onDndDrop!');
                  if(this == source){
                      // reordering items
                      console.debug('moving items from us');
                      // DO SOMETHING HERE
                  }else{
                      // moving items to us
                      console.debug('moving items to us');
                      // DO SOMETHING HERE
                  }
                  console.debug('this = ' + this);
                  console.debug('source = ' + source);
                  console.debug('nodes = ' + nodes);
                  console.debug('copy = ' + copy);
                  return dojo.dnd.Source.prototype.onDndDrop.call(this, source, nodes, copy);
              }
          });
      }

    I have an init function that uses this to decorate the lists:

      dojo.addOnLoad(function(){
          declare_mockupSmartDndUl();
          if(dojo.byId('list1')){
              //new mockup.SmartDndUl(dojo.byId('list1'));
              new dojo.dnd.Source(dojo.byId('list1'));
          }
          if(dojo.byId('list2')){
              new mockup.SmartDndUl(dojo.byId('list2'));
              //new dojo.dnd.Source(dojo.byId('list2'));
          }
          if(dojo.byId('list3')){
              new mockup.SmartDndUl(dojo.byId('list3'));
              //new dojo.dnd.Source(dojo.byId('list3'));
          }
      });

    It is fine as far as it goes; you will notice I left "list1" as a standard dojo dnd Source for testing. The problem is this: list1 will happily accept items from lists 2 & 3, which will move or copy as appropriate. However, lists 2 & 3 refuse to accept items from list1. It is as if the DnD operation is being cancelled, but the debugger does show the dojo.dnd.Source.prototype.onDndDrop.call happening, and the parameters do look OK to me. Now, the documentation here is really weak, so the example I took some of this from may be way out of date (I am using 1.4). Can anyone fill me in on what might be the issue with my extension dijit? Thanks!

    Read the article

  • View Generated Source (After AJAX/JavaScript) in C#

    - by Michael La Voie
    Is there a way to view the generated source of a web page (the code after all AJAX calls and JavaScript DOM manipulations have taken place) from a C# application, without opening up a browser from the code? Viewing the initial page using a WebRequest or WebClient object works OK, but if the page makes extensive use of JavaScript to alter the DOM on page load, then these don't provide an accurate picture of the page.

    I have tried the Selenium and WatiN UI testing frameworks and they work perfectly, supplying the generated source as it appears after all JavaScript manipulations are completed. Unfortunately, they do this by opening up an actual web browser, which is very slow. I've implemented a Selenium server which offloads this work to another machine, but there is still a substantial delay. Is there a .NET library that will load and parse a page (like a browser) and spit out the generated code? Clearly, Google and Yahoo aren't opening up browsers for every page they want to spider (of course, they may have more resources than me...). Is there such a library, or am I out of luck unless I'm willing to dissect the source code of an open source browser?

    SOLUTION: Well, thank you everyone for your help. I have a working solution that is about 10x faster than Selenium. Woo! Thanks to this old article from beansoftware I was able to use the System.Windows.Forms.WebBrowser control to download the page and parse it, then give me the generated source. Even though the control is in Windows.Forms, you can still run it from ASP.NET (which is what I'm doing); just remember to add System.Windows.Forms to your project references.

    There are two notable things about the code. First, the WebBrowser control is called in a new thread, because it must run in a single-threaded apartment. Second, the GeneratedSource variable is set in two places. This is not due to an intelligent design decision :) I'm still working on it and will update this answer when I'm done. wb_DocumentCompleted() is called multiple times: first when the initial HTML is downloaded, then again when the first round of JavaScript completes. Unfortunately, the site I'm scraping has three different loading stages: 1) load the initial HTML, 2) do a first round of JavaScript DOM manipulation, 3) pause for half a second, then do a second round of JS DOM manipulation. For some reason, the second round isn't caught by the wb_DocumentCompleted() handler, but it is always caught when wb.ReadyState == Complete. So why not remove it from wb_DocumentCompleted()? I'm still not sure why it isn't caught there, and that's where the beansoftware article recommended putting it. I'm going to keep looking into it. I just wanted to publish this code so anyone who's interested can use it. Enjoy!
      using System.Threading;
      using System.Windows.Forms;

      public class WebProcessor
      {
          private string GeneratedSource { get; set; }
          private string URL { get; set; }

          public string GetGeneratedHTML(string url)
          {
              URL = url;
              Thread t = new Thread(new ThreadStart(WebBrowserThread));
              t.SetApartmentState(ApartmentState.STA);
              t.Start();
              t.Join();
              return GeneratedSource;
          }

          private void WebBrowserThread()
          {
              WebBrowser wb = new WebBrowser();
              wb.Navigate(URL);
              wb.DocumentCompleted +=
                  new WebBrowserDocumentCompletedEventHandler(wb_DocumentCompleted);
              while (wb.ReadyState != WebBrowserReadyState.Complete)
                  Application.DoEvents();
              // Added this line, because the final HTML takes a while to show up
              GeneratedSource = wb.Document.Body.InnerHtml;
              wb.Dispose();
          }

          private void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
          {
              WebBrowser wb = (WebBrowser)sender;
              GeneratedSource = wb.Document.Body.InnerHtml;
          }
      }

    Read the article

  • Read hexadecimal numbers from a file and change their representation style

    - by user576844
    I want to write a program that changes the notation of all hexadecimal numbers found in an assembly source file from the traditional style (trailing h) to C style (0x). I have started the coding part, but am not sure how I can detect the hexadecimal numbers, change their style, and save the result back to the file. Here is what I have so far:

      ## MIPS program
      .data
      fin:    .ascii ""          # filename for input
      msg0:   .asciiz "aaaa"
      msg1:   .asciiz "Please enter the input file name:"
      buffer: .asciiz ""

      .text
      #-----------------------
      li $v0, 4
      la $a0, msg1
      syscall

      li $v0, 8
      la $a0, fin
      li $a1, 21
      syscall

      jal fileRead              # read from file
      move $s1, $v0             # $t0 = total number of bytes
      li $t0, 0                 # loop counter
      loop:
      bge $t0, $s1, end         # if end of file reached OR if there is an error in the file
      lb $t5, buffer($t0)       # load next byte from file
      jal checkhexa             # check for hexadecimal numbers
      addi $t0, $t0, 1          # increment loop counter
      j loop
      end:
      jal output
      jal fileClose
      li $v0, 10
      syscall

      fileRead:                 # open file for reading
      li $v0, 13                # system call for open file
      la $a0, fin               # input file name
      li $a1, 0                 # flag for reading
      li $a2, 0                 # mode is ignored
      syscall                   # open a file
      move $s0, $v0             # save the file descriptor
      # reading from file just opened
      li $v0, 14                # system call for reading from file
      move $a0, $s0             # file descriptor
      la $a1, buffer            # address of buffer from which to read
      li $a2, 100000            # hardcoded buffer length
      syscall                   # read from file
      jr $ra

    Any help would be appreciated.
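
    As a side note, the transformation itself (find a trailing-h hex literal, strip the suffix, prefix 0x) is easy to prototype outside MIPS before committing it to assembly. A rough shell sketch, assuming traditional literals such as 1Fh or 0A3h that begin with a digit:

      #!/bin/bash
      # Rewrite trailing-'h' hex literals (e.g. 1Fh) to C-style 0x literals (0x1F).
      # Assembly hex literals must begin with a digit, hence the leading [0-9].
      sed -E 's/\b([0-9][0-9A-Fa-f]*)[hH]\b/0x\1/g' input.asm > output.asm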

    Read the article

  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files. However, only a few files were copied to the pen drive before the copy operation hung, so I cancelled it. Now I am unable to use the pen drive. I DON'T REALLY NEED ANY FILES THAT I COPIED TO THE PENDRIVE. I JUST WANT TO USE THE PENDRIVE AGAIN.

    I have tried using Ubuntu to format the pen drive. But when I use fdisk to delete the partition, it looks like it is working fine, but in fact it does not delete the partition. I am also unable to format it with any other file system. When I tried to use GParted, it threw the following error:

      Error mounting: mount exited with exit code 14: The disk contains an unclean file system (0,0).
      The file system wasn't safely closed on Windows. Fixing.
      ntfs_attr_pread_i: ntfs_pread failed: Input/output error
      Failed to read NTFS $Bitmap: Input/output error
      NTFS is either inconsistent, or there is a hardware fault, or it's a softRAID/FakeRAID hardware.
      In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f
      parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and
      mount a different device under the /dev/mapper directory, (e.g. /dev/mapper/nvidia_eahaabcc1).
      Please see the dmraid documentation for more details.

    When I searched the Internet I found help on how to recover, but I don't want to recover; I want to format it again. When I pressed w after deleting the partition, it took more time than previously. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I simply type the command fdisk /dev/sdb without removing the pen drive after the partition is deleted, it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

      root@yesuraj-ubuntu:~# fdisk /dev/sdb
      Command (m for help): d
      Selected partition 1
      Command (m for help): w
      The partition table has been altered!
      Calling ioctl() to re-read partition table.
      Syncing disks.

    The dmesg prints are as follows:

      [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
      [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
      [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
      [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
      [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
      [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
      [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
      [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table. But now when I connect the pen drive, dmesg contains the error message [88143.437001] sdb: unknown partition table, and I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to find the node. Other messages from dmesg follow below.

      [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
      [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
      [87135.932647] usb 2-1.3: device descriptor read/8, error -110
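
    For reference, when the data is expendable, the usual wipe-and-rebuild sequence looks like the sketch below, assuming the stick really is /dev/sdb (verify with lsblk or dmesg first; dd to the wrong device is destructive). The repeated error -110 (timeout) lines may also mean the stick or USB port is failing, in which case no software fix will hold:

      #!/bin/bash
      # WARNING: destroys everything on /dev/sdb -- double-check the device name!
      sudo dd if=/dev/zero of=/dev/sdb bs=1M count=8   # wipe partition table and FS headers
      sudo partprobe /dev/sdb                          # have the kernel re-read the empty table
      sudo fdisk /dev/sdb                              # create one new primary partition (n, then w)
      sudo mkfs.vfat -F 32 /dev/sdb1                   # format it as FAT32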

    Read the article

  • Existing open source software for wireless mesh networking?

    - by user70352
    Hello!

    My goal: build a wireless mesh network with some ALIX 2D2 boards (500 MHz AMD Geode LX800 x86 CPU, 256MB RAM, Atheros wireless card). Aside from working like a normal wireless mesh network, users should be able to read/write data from/to the ALIX boxes, and the ALIX boxes should be able to process data.

    Questions:

    - Should I try to flash dd-wrt x86, voyage linux (linux.voyage.hk), or something else?
    - What (open source) software should I look into before I start?
    - Should I use a 'server' for data storage and processing instead of the ALIX boxes?
    - Is it even possible to use the ALIX boxes for routing AND data storage and processing?

    Final notes: data can be anything. For example, I want to set up a wireless mesh using the OLSRD protocol so my whole town gets wifi and can access songs on the network. It's not for that, but that's the idea. I'm not afraid of programming, compiling, or working with *nix. This is mostly 'for fun' rather than for practicality. Thanks in advance for any feedback.

    Read the article

  • How should I fix problems with file permissions while restoring from Time Machine?

    - by Andrew Grimm
    While restoring files from a Time Machine backup, I got the error message "The operation can’t be completed because you don’t have permission to access some of the items." because of problems with files in one folder. What's the safest way to deal with this? The folder in question has permissions like:

      Andrew-Grimms-MacBook-Pro:kmer agrimm$ pwd
      /Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer
      Andrew-Grimms-MacBook-Pro:kmer agrimm$ ls -ltra
      total 6156896
      drwxrwxrwx@ 19 agrimm staff        680 18 Jan  2008 Saccharomyces_cerevisiae
      -r--------@  1 agrimm staff   60221852  4 Aug  2009 hs_ref_GRCh37_chrY.fa
      -r--------@  1 agrimm staff  157488804  4 Aug  2009 hs_ref_GRCh37_chrX.fa
      (snip a few files)
      -r--------@  1 agrimm staff     676063 27 Oct  2009 NC_001143.fna
      -rw-r--r--@  1 agrimm staff       6148 23 Mar  2010 .DS_Store
      drwxr-xr-x@  3 agrimm staff       1530 23 Mar  2010 .
      drwxr-xr-x@ 30 agrimm staff       1054 20 Nov  14:43 ..

    Is it ok to do sudo chmod, or is there a safer approach?

    Background: files within the original folder on my computer also had weird permissions. I suspect I may have used sudo to copy some files from a thumbdrive onto my computer.
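
    One low-risk pattern is to leave the backup's permissions untouched and instead make an elevated copy, then take ownership of the copy only. A sketch using the path from the question (the destination is illustrative):

      #!/bin/bash
      # Elevated copy out of the backup; the backup itself is never modified.
      SRC="/Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer"
      sudo cp -Rp "$SRC" ~/restored-kmer
      sudo chown -R agrimm:staff ~/restored-kmer   # take ownership of the copy
      chmod -R u+rw ~/restored-kmer                # make it readable/writable for yourself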

    Read the article

  • File association and msconfig broken. Malware?

    - by Moshe
    A friend of mine has an Acer laptop running Vista Home Basic. I can't get System Properties to open, msconfig does not run, and the .exe file type is asking me what program to open it with. How can I fix that? I'm running AVG now. Assuming nothing shows up, what are the fixes for the issues mentioned above?

    Read the article

  • Change the default route without affecting existing TCP connections

    - by Patrick Horn
    Let's say I have two public network addresses on my server: one NAT through an ISP (192.168.99.0/24), and a VPN through a different ISP (192.168.1.0/24), already configured with a per-host route to the VPN server through my ISP. Here is my initial routing table; I am currently routing through my ISP on subnet 192.168.99.0/24:

      $ route -n
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      0.0.0.0         192.168.99.1    0.0.0.0         UG    0      0        0 eth1
      55.66.77.88     192.168.99.1    255.255.255.255 UGH   0      0        0 eth1
      192.168.99.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
      192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 tap0

    Now, I want new TCP connections to switch to 192.168.1.0/24, so I type the following:

      $ route add -net 0.0.0.0 gw 192.168.1.1 dev tap0

    When I do this, it causes some long-standing TCP connections to hang. Is there a way to safely change the default interface for new connections, while allowing existing TCP connections to use the old route (i.e. do I need to enable some sort of stateful routing table)? I am okay with a solution that only works with established TCP connections, and I don't care how hacky it is. For example, if there is a way to add temporary iptables rules for existing connections to force them over the old route. But there has to be some way to do this.

    EDIT: Just a note about a simple "route add -host ..." for existing connections: this solution would work if I were fine with leaving a subset of IPs on the old interface. However, in my application this doesn't actually solve my problem, because I want to allow new connections to come in on the new interface even if they have the same source IP. I'm now looking at using the "ip route" command to set source-based routing rules.
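
    Following that last thought, an untested policy-routing sketch of one way this is commonly done: tag the connections that already exist with a conntrack mark, pin marked traffic to the old gateway via a separate routing table, then flip the main default route so only new connections use the VPN. Table number and addresses below are illustrative:

      #!/bin/bash
      # 1. Keep the old default route reachable via its own table, selected by mark.
      ip route add default via 192.168.99.1 dev eth1 table 100
      ip rule add fwmark 1 table 100

      # 2. Briefly tag already-established flows with a conntrack mark, and copy
      #    that mark onto their packets so the rule above catches them.
      iptables -t mangle -A OUTPUT -m conntrack --ctstate ESTABLISHED -j CONNMARK --set-mark 1
      iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
      sleep 5   # let existing flows send a packet and pick up the tag
      iptables -t mangle -D OUTPUT -m conntrack --ctstate ESTABLISHED -j CONNMARK --set-mark 1

      # 3. Only now flip the main default route; new connections use the VPN.
      ip route replace default via 192.168.1.1 dev tap0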

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my webui and am able to get responses from my knife commands, e.g. knife node list, knife client list, knife user list, etc. I'm able to update roles, data bags, environments, and so on, but I cannot upload any cookbooks. I'm running my workstation on Mac OS X. I keep getting this output at the end of my knife cookbook upload -VV curl command; it doesn't matter which cookbook I upload, or whether I upload them all, I keep getting the same response:

      DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response
      /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError)
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'
      INFO: HTTP Request Returned 204 No Content:

    Read the article

  • Compiled git from source, cannot access subversion repositories using git-svn

    - by haydenmuhl
    I'm setting up a CentOS dev box and need git. At first I tried to install git using yum, but I could not connect to a yum repository that had git. Next I downloaded the git source (version 1.7.1) and compiled it. When I run the following command:

      git svn clone svn+ssh://...

    I get the following error:

      Initialized empty Git repository in /root/main_ec/.git/
      Can't locate SVN/Core.pm in @INC (@INC contains:
      /usr/local/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/local/lib/perl5/site_perl/5.8.8
      /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8
      /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi
      /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl
      /usr/lib/perl5/5.8.8/i386-linux-thread-multi /usr/lib/perl5/5.8.8 .)
      at /usr/local/libexec/git-core/git-svn line 41.

    It looks like git-svn uses perl in some manner, but I don't know which packages I'm missing. Can anyone help?
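
    For what it's worth, SVN/Core.pm comes from the Subversion Perl bindings, which git-svn loads at runtime. On CentOS the package is typically named subversion-perl (the exact name can vary by repo), so a likely fix is:

      #!/bin/bash
      sudo yum install subversion-perl   # provides SVN/Core.pm for git-svn
      # If the bindings land somewhere Perl does not search, point it at them:
      # export PERL5LIB=/usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi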

    Read the article

  • Nautilus cannot move to trash

    - by amorfis
    This takes place on Ubuntu. I want to move a file to the trash. I am not the owner of the file, but the file belongs to root:samba, I am a member of the samba group, and the file permissions are rwxrw-r--. I get the message "Cannot move file to trash, do you want to delete immediately?" and nothing more. Why can't I move it to the trash?

    Read the article

  • How to create a service running a .bat file on Windows 2008 Server?

    - by abyx
    I've created the service using:

      sc create myService binpath=myservice.bat

    But when I start it, it fails with the following error message:

      [SC] StartService FAILED 1053:
      The service did not respond to the start or control request in a timely fashion.

    On Win2k3 I used srvany.exe from the Resource Kit, but there is no Resource Kit for Win2k8. For the time being I've installed srvany.exe on my machine, but I don't think that's the best way to do it. Thanks!

    Read the article

  • Batch convert *.avi files using ffmpeg

    - by Darius
    I am trying to convert 20+ .avi files in a batch using ffmpeg. I've got the following in my .bat file:

      @echo off
      for file in *.avi do ffmpeg -i "$file" -s 640x480 -vcodec msmpeg4v2 "'basename "$file" .avi'.mpg'; done

    but it does not work. How can I make it work under Windows? Oh, and yes, all the files are in the same folder. The error message I get:

      File was unexpected at this time
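
    The loop above is bash syntax, which cmd.exe cannot parse (hence the "File was unexpected" error). Under a POSIX shell on Windows, such as Git Bash or WSL, a cleaned-up version of the same loop would be:

      #!/bin/bash
      # Same conversion, written as valid shell (cmd.exe cannot run this).
      for file in *.avi; do
          ffmpeg -i "$file" -s 640x480 -vcodec msmpeg4v2 "$(basename "$file" .avi).mpg"
      done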

    Read the article

  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing them, etc. I use a handful of components to help my migrations. For my unattend.xml I like:

    - Windows-International-Core-WinPE -- good for setting locales in the preboot environment (en-US for US English speakers). Keeps me from having to set these on the initial image boot.
    - Windows-Setup_neutral -- I like the WindowsDeploymentServices -> ImageSelection setting, especially if I'm only pushing a single image. This keeps me from having to select it each time.

    My OOBE_Unattend.xml is really useful, and I barely have to touch anything during this part of the installation:

    - Windows-Shell-Setup_neutral -- lets me put in a ProductKey for my MAK volume license (very useful and time saving). I can also set the TimeZone for the installation.
    - Windows-UnattendedJoin_neutral -- I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to not have this ability.
    - Windows-International-Core -- again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.
    - Windows-Shell-Setup -- allows you to configure an autologon when the new machine is finished. I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also, the OOBE component under here lets me skip the EULA, hide wireless setup, and set my default NetworkLocation.

    All of this makes the entire OOBE totally automated. What are some other good components I am missing, as far as helping me get these images pushed and configured as quickly as possible?

    Read the article

  • How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?

    - by Rob
    I've backed up some audio files in two places and added ID3 tags to one backup but not the other. Since time has passed, my own memory has faded on whether the backups are actually the same, but now one has ID3 data and the other doesn't, so a basic binary compare will fail and manual inspection would be cumbersome. Is there a tool to compare just the audio data (not the header or ID3 tag) in mp3 files, flac files, and other files that use header data such as ID3?

    I started a thread on Beyond Compare here: http://www.scootersoftware.com/vbulletin/showthread.php?t=7413 but would consider other comparison software that does this task.
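
    One approach, assuming ffmpeg is available: hash the decoded audio stream of each file, so container metadata such as ID3 never enters the comparison. ffmpeg's md5 muxer does this in one step (use the same ffmpeg build on both files, since different decoder versions can produce slightly different PCM):

      #!/bin/bash
      # Hash only the decoded audio; identical audio gives identical hashes
      # regardless of ID3 tags or other container metadata.
      ffmpeg -loglevel error -i with_tags.mp3 -map 0:a -f md5 -
      ffmpeg -loglevel error -i without_tags.mp3 -map 0:a -f md5 -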

    Read the article

  • File Server - Storage configuration: RAID vs LVM vs ZFS vs something else?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this:

    - 500 GB is not really big enough (some projects are larger).
    - It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now, and that will only get worse once there are multiple servers ("the project is on server2 in share4", etc.).

    So, I need a way to combine hard drives such that I avoid complete data loss on the failure of a single drive, and users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks OK, but it seems like no one uses it. ZFS seems interesting, but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users?

    Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective (i.e. they see one "folder" per server). Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together.

    My prior experience with RAID5 was also on an Ubuntu Server box, and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again, but was left with a feeling that I was adding an unnecessary additional point of failure to the system.

    I haven't used RAID10, but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives, and 1.5 TB is pretty small. (Still an option for at least one server, however.)

    I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and it stored most files on a single drive only), we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system(?), so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though.

    On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
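
    To make the Linux-software-RAID option concrete, here is a sketch of a single-share layout on the existing box; device names and mount point are illustrative, and RAID10 trades half the raw capacity for simpler failure behavior than RAID5:

      #!/bin/bash
      # Combine four 500 GB drives into one ~1 TB RAID10 array with one filesystem.
      sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
      sudo mkfs.ext4 /dev/md0
      sudo mkdir -p /srv/media
      sudo mount /dev/md0 /srv/media   # export this single mount as the one Samba share
      cat /proc/mdstat                 # watch array health / rebuild state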

    Read the article
