Search Results

Search found 51448 results on 2058 pages for 'log files'.


  • What's the fastest way to store/access large files?

    - by philfreo
    I do a lot of video editing on my Mac and need a way to store very large (30 GB) files, but I don't have room on my HD. A USB/FireWire external hard drive would work, but it seems way too slow for consistently working with such large files. I've also considered buying another computer with a large hard drive and putting it on the same network with a shared folder. What's the fastest / most efficient way to do this? Please consider USB 2.0 speeds, hard drive read times, Ethernet speeds, etc. Are there other options I should consider?

    Read the article

  • How do I extract files from one tarball to another tarball in one step?

    - by Martin
    I have some fairly large tarball archives from which I need to extract some files. I will later repack those files to transfer them to another server. Currently that is a two (multi) step process for me:
      mkdir ttmp
      tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> <folder-to-be-extracted>
    or alternatively with wildcards:
      mkdir ttmp
      tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> \
          --wildcards --no-anchored '*pattern*'
    Then I go ahead and recompress the created folder:
      tar -vczf small.tgz ttmp/*
      rm -rf ttmp
    How can I combine these two commands into one? Like this:
      tar -x large.tgz > tar -c small.tgz
    Just to show what I already tried: whenever I search the term "extract" I end up here or here or even here. When I use the term "split" I end up here, and that is definitely not what I intend to do. When I use "repack" I end up in strange places.
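
    A possible starting point, not from the original question: the two stages can at least be collapsed into one command line, although this sketch still uses a scratch directory because a fully streamed "extract and repack" is not straightforward with plain tar. The <INT> and '*pattern*' placeholders are the question's own:
      mkdir ttmp \
        && tar -xzf large.tgz -C ttmp/ --strip-components=<INT> \
               --wildcards --no-anchored '*pattern*' \
        && tar -czf small.tgz -C ttmp . \
        && rm -rf ttmp
    The only real change is chaining the steps with && and repacking relative to ttmp (-C ttmp .) so the temporary path does not end up inside small.tgz.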

    Read the article

  • How can I share files from my Windows 7 machine to my friend's Ubuntu machine?

    - by ProfKaos
    I run a Windows 7 Pro SP1 laptop as my home machine, and my housemate runs an Ubuntu 12.04.1.05 desktop. We share a WLAN. I would like to make certain locations and files available for him to read and maybe write. How can I go about this, bearing in mind that I have very little recent experience with modern Linux, and Ubuntu in particular? My first idea is to share a Windows folder with my Ubuntu VM under VMware Player; his Ubuntu machine could then connect to my Ubuntu VM, and the two could use whatever magic Ubuntu uses to achieve file sharing. This requires my Ubuntu VM to be always running, though, and that may not always be possible. I have also heard that Samba may have a feature to help here, but I know nothing about that. How can I share my Windows files with my mate's Ubuntu machine, preferably over a direct 1-to-1 connection, i.e. I'd rather not use shim VMs.
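
    A minimal sketch of the Samba/CIFS route, not from the original question (the share name, Windows hostname, and user below are made up): Windows 7 already exposes shared folders over SMB, so the Ubuntu machine only needs the CIFS client to mount them directly, with no VM in the middle.
      # on the Ubuntu 12.04 machine
      sudo apt-get install cifs-utils
      sudo mkdir -p /mnt/win7share
      sudo mount -t cifs //WIN7-LAPTOP/Shared /mnt/win7share \
           -o username=winuser,uid=$(id -u),gid=$(id -g)
    Read/write access then follows whatever permissions the Windows share grants to that Windows account.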

    Read the article

  • How do I recover files from a corrupt VDI file?

    - by Eric P
    Is it possible to repair a corrupt VDI file? The OS on the VDI (XP) doesn't boot at all; it just hangs at a black screen. I was getting file errors on its last boot, but now it's not working at all. A sector viewer shows 'Invalid partition table / Error loading operating system / Missing operating system'. I tried mounting the file from the host OS, but it just says that the drive isn't formatted. I don't need to be able to run the VDI, but I do need some files that are on it. Is there any way to recover files from the corrupt VDI file?
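
    One commonly suggested direction, offered here only as a sketch and not as a guaranteed fix: convert the VDI to a raw image with VirtualBox's own tooling and then point a file-recovery tool such as TestDisk/PhotoRec at the image. Filenames are placeholders:
      # flatten the (possibly corrupt) VDI into a raw disk image
      VBoxManage clonehd corrupt.vdi recovered.img --format RAW
      # scan the raw image for lost partitions and recoverable files
      testdisk recovered.img
      photorec recovered.img
    If only the partition table is damaged, TestDisk may be able to rewrite it; otherwise PhotoRec can still carve individual files out of the image.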

    Read the article

  • Adobe Acrobat: How do I batch-combine multiple PDF files?

    - by Andrei Andre
    I have 3 folders:
      Folder 1
      Folder 2
      Folder 3
    In each folder I have two PDF files:
      Folder 1: file1.pdf, file2.pdf
      Folder 2: file1.pdf, file2.pdf
      Folder 3: file1.pdf, file2.pdf
    I want each folder to end up with a combined file of those two files:
      Folder 1: binder.pdf
      Folder 2: binder.pdf
      Folder 3: binder.pdf
    Any idea? Don't tell me to do it manually; this case is just to illustrate my problem. Think of it as hundreds of folders. :) Maybe I can use another tool instead of Adobe Acrobat?
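
    Since another tool is on the table, a hedged alternative to Acrobat's batch feature is a small shell loop around Ghostscript (this assumes a Unix-like shell, Ghostscript installed, and that the folders sit side by side; the glob pattern is an assumption):
      # merge file1.pdf and file2.pdf into binder.pdf inside every folder
      for d in Folder*/; do
        gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite \
           -sOutputFile="${d}binder.pdf" "${d}"file*.pdf
      done
    pdftk ("pdftk file1.pdf file2.pdf cat output binder.pdf") would work the same way inside the loop if Ghostscript's re-processing of the PDFs is not wanted.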

    Read the article

  • Seeking a solution to automatically copy files from a CD-ROM to a USB drive once it's connected.

    - by Ray Nathan
    I plan to distribute a free CD that automatically copies files to a connected USB device. This process will be done on the computers of the users who obtain the CD. The CD will contain an autorun.inf file that will instruct the computer to copy a set of files located on the CD to a specific directory on the connected USB device. The USB drive letter is not the same on all systems, so Windows XP should automatically determine the drive letter of the USB device before the copy operation begins. What would be the best way of creating a short batch file or script that I can place on the CD to execute this process? Also, please note that it is NOT feasible or recommended to include a batch file on the USB devices to sync this operation, for the reason explained at the beginning of this paragraph. :) Thank You All

    Read the article

  • Why does the forfiles command list files that are not a day old despite being told otherwise?

    - by PeanutsMonkey
    The command I am executing is:
      forfiles -p"C:\testdata" -m*.* -d-1 -c"cmd /c echo @PATH\@FILE"
    I have specified that I only want to list files that are at least a day old; however, when I execute the statement, it returns a list of files that were created today. Why is that? Am I doing something wrong? Would it be better to specify a time period as opposed to a date, e.g. 24 hours? The version of forfiles I have reports itself as "FORFILES v 1.1 - [email protected] - 4/98". The batch file is being run on Windows XP.

    Read the article

  • How to print TIFF files using Microsoft Office Document Imaging?

    - by Think Floyd
    OS: Vista and Windows 7. I have Microsoft Office Document Imaging installed, and the .tif and .tiff file association is set to Microsoft Office Document Imaging. When I open a TIFF file, it opens in Microsoft Office Document Imaging. Good so far. However, when I right-click on the TIFF file and invoke Print, I see a "Print Pictures" dialog ("How do you want to print your pictures?"). I have some applications installed on my machine that print incoming TIFF files on the printer. They work fine on XP. However, on Vista and Windows 7, I get this "Print Pictures" prompt requiring user intervention (i.e., a click on the Print button). How do I get rid of this "Print Pictures" prompt?

    Read the article

  • How can I evaluate the best choice of archive format for compressing files?

    - by Mehrdad
    In general, I've observed the following:
      - Linux-y files or tools use bzip2 or gzip for distributing archives
      - Windows-y files or tools use ZIP for distributing archives
      - Many people use 7-Zip for creating and distributing their own archives
    Questions:
      - What are the advantages and disadvantages of these formats, all of which appear to be open formats?
      - When/why should I choose one (say, 7-Zip) over another (say, ZIP)?
      - Why does the trend above appear to hold, even though all of these are portable formats?
      - Are there any particular advantages to using a particular archive format on a particular platform?

    Read the article

  • What is the best filesystem for storing thousands of files in one dictionary-like id-blob structure?

    - by Ivan
    What filesystem best suits my needs?
      - Thousands or even millions of files in one directory.
      - Good reliability (including fault tolerance) and access speed, at or close to the level of ext4 and NTFS.
      - No directories or descriptive names actually needed; a dictionary-like structure of id-blob pairs is all I need.
      - No links, attributes, or access-control features needed.
    The purpose is a file store where all the metadata (data describing what the file actually contains and who can access it) is kept in a MySQL database. As far as I know, common filesystems like NTFS and ext3/4 can go dead slow if too many files are placed in one directory - that's why I ask.

    Read the article

  • How can I audit a Linux filesystem for files which have been changed or added within a specific time

    - by Bcos
    We are a website design/hosting company running several sites on a Linux server using Joomla 1.5.14, and recently someone was able to exploit a vulnerability in the RW Cards component to write arbitrary files and modify existing files on our filesystem, enabling them to do some nasty things to our customers' sites. We have removed the vulnerable modules from all sites but are still seeing some problems. We suspect that they still have some scripts installed and need a way to audit anything that has been changed or added in the last 10 days. Is there a command or script we can run to do this?
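
    A minimal sketch of the obvious first pass, assuming the sites live under /var/www (the path is an assumption): find can list anything whose contents or metadata changed inside the window.
      # files whose contents were modified in the last 10 days
      find /var/www -type f -mtime -10 -ls
      # files whose inode metadata (ownership, permissions, link count) changed in the last 10 days
      find /var/www -type f -ctime -10 -ls
    This only catches honest timestamps; an attacker can reset mtime, so a package-verification or tripwire-style integrity check is the more rigorous follow-up.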

    Read the article

  • How do I set the umask for files and directories created from the GUI in MacOS X Lion (10.7)?

    - by Avry
    I've set my umask in my .bashrc file to 007. Any files created on the command line after loading my .bashrc respect this setting. I want to be able to set the umask to 007 for any files created using non-command-line apps. This document talks about setting the umask via launchd, and it kind of works: if I follow those directions I can change the default permissions on a GUI-created file from rw-r--r-- to rw-rw----, but the directories are still not group-writable (i.e. I want them to be rwxrwx--- but they are rwxr-x--- instead). The analog on Linux would be /etc/login.defs as the place to set the umask. What do I change in order for the umask to be set properly (i.e. the way I want it)?

    Read the article

  • How do I catalog files on several external hard drives that I want to store off-line? (OS X)

    - by raudi
    My partner, an artist, has more than 10 external hard disks, both USB and FireWire, and every 2-3 months a new one has to be added (she's working with videos and pictures). Currently it's 10 TB and growing, so too much for an affordable NAS. Right now the files are not indexed and, I think, cannot be searched with Spotlight, because not all drives can be connected at the same time. So if she wants to search for a file, she has to guess which disk or disks (based mostly on the date) and then search several drives. Now I'm looking for a solution to index/catalog the drives, something like GentibusCD, Cathy, or Disclib (all of these are unfortunately Windows-only). Is there any software for OS X that will catalog all the hard drives, so she can search the catalog, find the files, and get the ID or name of the disk that has the content? Preferably something with a GUI so my partner can also use it easily, and preferably with thumbnails for pictures/videos (but even an equivalent to "tree /F /A" would be better than nothing).
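
    As a stopgap along the "tree /F /A" line, here is a minimal sketch (the volume name, catalogue path, and search term are made up) that builds a plain-text catalogue per drive while it is mounted and then searches all catalogues with the drives offline; it gives no thumbnails, only names and paths.
      # run once per drive while it is connected
      mkdir -p ~/catalogs
      find "/Volumes/Archive 07" > ~/catalogs/"Archive 07".txt

      # later, with everything unplugged, search every catalogue at once
      grep -i "interview_final" ~/catalogs/*.txt
    The filename of the matching catalogue tells you which disk to plug back in.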

    Read the article

  • Adding text to the beginning and end of a number of files?

    - by John Feminella
    I have a number of files in a directory hierarchy. For each file, I'd like to add "abcdef" to the beginning, on its own line, and "ghijkl" to the end, on its own line. For example, if the files initially contained:
      # one/foo.txt
      apples
      bananas

      # two/three/bar.txt
      coconuts
    then afterwards, I'd expect them to contain:
      # one/foo.txt
      abcdef
      apples
      bananas
      ghijkl

      # two/three/bar.txt
      abcdef
      coconuts
      ghijkl
    What's the best way to do this? I've gotten as far as:
      # put stuff at start of file
      find . -type f -print0 | xargs -0 sed -i 's/.../abcdef/g'
      # put stuff at end of file
      find . -type f -print0 | xargs -0 sed -i 's/.../ghijkl/g'
    but I can't seem to figure out what to put in the ellipses.
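
    A hedged sketch of one answer, assuming GNU sed as in the question's -i usage: rather than a substitution, sed's insert (i) and append (a) commands can be anchored to the first and last lines.
      # put "abcdef" on its own line at the start of every file
      find . -type f -print0 | xargs -0 sed -i '1i abcdef'

      # put "ghijkl" on its own line at the end of every file
      find . -type f -print0 | xargs -0 sed -i '$a ghijkl'
    The single quotes matter for the second command so the shell does not try to expand $a.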

    Read the article

  • Why is ext3 so slow to delete large files?

    - by Janis Peisenieks
    I have a server which makes an incremental backup of a system every night. On Saturdays there is a full backup, and after the full backup has finished, a script kicks in that deletes the incrementals. Now, the script sometimes breaks, because the incrementals are each about 10 GB and deleting them sometimes takes too long for the script. Could someone explain to me, or point me to a resource that explains, why ext3 is so slow to delete files compared to, let's say, NTFS? I know these are two completely different file systems, but I'm really interested in why there is such a big difference in deletion.

    Read the article

  • Nginx (for static files) and Apache (for dynamic content)?

    - by matthewsteiner
    So, my entire application runs on Apache just fine. However, I want to test how much the requests per second increase if I put all static files through nginx instead. I found this thread: http://stackoverflow.com/questions/869001/how-to-serve-all-existing-static-files-directly-with-nginx-but-proxy-to-apache-t But I have a couple of problems. I'm completely new to nginx, so I'm not sure where to put the configuration. (The file is at /etc/nginx/nginx.conf, but I don't know if I just add the code to the bottom or what.) Also, how can I have both servers running at the same time? Is the problem that they both listen on port 80? Right now I have to stop one to start the other, and that's as far as I've gotten. Thanks for any help.

    Read the article

  • What is a Windows text editor that will make it easy for me to have four text files open onscreen at once?

    - by Ascendant
    When brainstorming / planning I like to have four text files open onscreen at once: one for notes/stream of consciousness, one for action items to follow up on, one for a rough outline, etc. What I'm looking for is an easy way to create and save four text files in this manner in Windows. Most importantly, I need the lines to wrap based on the width of the actual window itself - not based on a ruler or document size (a la Word or WordPad), and not wrapping "manually only" (like Windows' built-in Notepad application). Also, I need the windows to have no, or at least little, fluff at the top of each document (menu bars, ribbons, etc.). On my Mac, I've found that the built-in TextEdit application is almost perfect for this: there's no header or ribbon taking up space for each document, and lines wrap when they hit the edge of the window. I haven't had any luck finding a Windows application that works the same way.

    Read the article

  • nginx virtual hosts are not working, all vhosts goes to the default one

    - by Adirael
    Hello, I just did a clean install of nginx + php-fpm on a VPS running Ubuntu 10.10. nginx is serving and PHP is working fine, but I'm not able to add vhosts to it. Well, I can add them, but only one works; the rest go to this first one. This is my first vhost, for host1:
      server {
          listen 80;
          server_name host1;
          access_log /var/log/nginx/host1.log;
          error_log /var/log/nginx/host1.error.log;
          location / {
              root /var/www/vhosts/host1/;
              index index.html index.htm index.php;
          }
          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              #fastcgi_pass 127.0.0.1:9000;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host1/$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_script_name;
              fastcgi_index index.php;
          }
      }
    And the second one, for host2:
      server {
          listen 80;
          server_name host2;
          access_log /var/log/nginx/host2.log;
          error_log /var/log/nginx/host2.error.log;
          location / {
              root /var/www/vhosts/host2/;
              index index.html index.htm index.php;
          }
          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              #fastcgi_pass 127.0.0.1:9000;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host2/$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_script_name;
              fastcgi_index index.php;
          }
      }
    The problem is, when I go to http://host1 everything is fine, but http://host2 just shows host1! I don't have Apache installed and everything comes from repos. Any pointers?

    Read the article

  • Lots of files being loaded by a blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website and I was using the network waterfall facility in Google Chrome. When I looked at the results there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server and I got the output below. What are all these files, are they all necessary, is this normal, and do I need to be in any way concerned? My hosting provider has always been excellent in every regard that I'm aware of. My host is shared hosting, using cPanel, and is based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser cache issue.

    Read the article

  • Is there a convenient method to pull files from a server in an SSH session?

    - by tel
    I often SSH into a cluster node for work and, after processing, want to pull several results back to my local machine for analysis. Typically, to do this I use a local shell to scp from the server, but this requires a lot of path manipulation. I'd prefer to use a syntax like interactive FTP and just 'pull' files from the server to my local pwd. Another possible solution might be to have some way to automatically set up my client computer as an ssh alias, so that something like scp results home:~/results would work as expected. Is there any obscure SSH trick that'll do this for me? Working from grawity's answer, a complete solution in config files is something like the following.
    Local .ssh/config:
      Host ex
          HostName ssh.example.com
          RemoteForward 10101 localhost:22
    ssh.example.com .ssh/config:
      Host home
          HostName localhost
          Port 10101
    This lets me do commands exactly like
      scp results home:
    transferring the file results to my home machine.

    Read the article

  • How can I copy files to an external drive and verify their integrity in OS X?

    - by jedavis
    I'm moving large amounts of data from one external drive to another, larger one. The files are important, and the smaller drives need to be cleared and reused (HD camera). Is there some utility for moving files and verifying their integrity? I've been using this command in the terminal to create a list of MD5s for each file:
      find . -type f -exec md5 '{}' \; > md5list.txt
    and then using diff to compare the two lists. However, I am moving 320 GB at a time, which takes a while by itself, and computing the checksums takes another hour or so. It would be much more efficient to do this on the fly, during the copy. I'm just hoping someone has already written the software...
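
    A hedged sketch of one existing option (the volume paths are placeholders): rsync verifies each transferred file with a whole-file checksum as it copies, and a second dry-run pass with --checksum re-reads both sides and reports anything that does not match.
      # copy from the camera drive to the archive drive
      rsync -av /Volumes/CameraDrive/ /Volumes/ArchiveDrive/backup/

      # verification pass: re-checksum both sides, print any file that differs
      rsync -avn --checksum /Volumes/CameraDrive/ /Volumes/ArchiveDrive/backup/
    If the second command lists no files (beyond its usual summary lines), the copy matches the source byte for byte.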

    Read the article

  • Is there a way to have Vim do incremental search on text files?

    - by Alex
    Is there a way to have Vim do incremental search across text files? Vim already does incremental search within the currently open file. Examples of programs that demonstrate the type of search I'm trying to accomplish are Notational Velocity for Mac OS, Resop for Windows, and SimpleNote for the web. These apps do an instant or incremental search across the files of a specific directory and make it easy and fast to narrow down the file you are looking for or to create a new file. I use both but would rather live in one editor (that being Vim). Is there some plugin that would do this?

    Read the article
