Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Puppet exported resource naming

    - by Tim Brigham
    I am working on setting up a collection of Splunk nodes to be deployed by Puppet. One of the steps in this process is importing the trusts that allow these nodes to find each other automatically. I've looked over several options, and it appears that exported resources are the only ready-made way to make this work. The files I need to create are under /opt/splunk/etc/auth/distServerKeys//trusted.pem. The source for each of these files should be /opt/splunk/etc/auth/distServerKeys/trusted.pem, one per node. What syntax do I need to make this work? The samples I've looked at all appear to have the same source and destination file name.
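
    A minimal sketch of the exported-resource pattern this calls for, assuming storeconfigs/PuppetDB is enabled and a custom fact (hypothetically named splunk_trusted_pem here) exposes each node's own trusted.pem content. Each node exports a file resource whose path embeds its own hostname, then collects what every other node exported:

        # export this node's cert under a per-node destination path
        @@file { "/opt/splunk/etc/auth/distServerKeys/${::hostname}/trusted.pem":
          ensure  => file,
          content => $::splunk_trusted_pem,
          tag     => 'splunk_trust',
        }

        # collect the certs exported by every other node
        File <<| tag == 'splunk_trust' |>>

    The hostname in the path is what lets every node read from the same source file while each exported copy lands at a different destination on the collectors.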

    Read the article

  • Is there an image viewer which *won't* show every file in the current folder?

    - by hawbsl
    Looking for a Windows image viewer which can be started from the command line but which allows me to specify/restrict which files I want it to page through, as parameters. The good ol' Windows Picture Viewer would be fine, except that it shows/cycles through all the pictures in the current folder. In my case I want to say something like: someimgvwr.exe "cat.jpg" "cow.jpg" "cub.jpg" so that only those three files are displayed, and not "pig.jpg", which might also happen to be in the same folder. Actually, if it allowed something like this: someimgvwr.exe "c*.jpg" that would be even better. Do any of the many image viewers out there allow such a thing?

    Read the article

  • Using Windows Azure storage for backup

    - by Bruno
    I am currently looking at Windows Azure blobs as an option for backing up archive data. I want to be able to upload files from an external Windows machine via the internet, but I don't know enough about Windows Azure storage to make a decision. Some of the questions I have are: How do I upload the files? Is there a client application? Can I use robocopy? Would it be fast enough, i.e. could I download or upload 1TB of data in a week? Is it secure? Hopefully someone smarter than me can help me :-)
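
    On the client question: Microsoft ships a first-party command-line tool, AzCopy, for bulk transfers to blob storage over HTTPS (robocopy itself cannot target a blob container). A sketch, assuming the v10-style azcopy client, a hypothetical storage account mystorageacct, a container named backup, and a SAS token:

        azcopy copy "D:\archive" "https://mystorageacct.blob.core.windows.net/backup?<SAS-token>" --recursive

    Throughput then depends mostly on the uplink: 1 TB in a week works out to roughly 14 Mbit/s sustained, which is the number to compare against the machine's upstream bandwidth.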

    Read the article

  • Why can I not access any file or directory created by PHP from FTP-client?

    - by user43053
    Hello there, if I create a directory with mkdir(), or create a file with fopen(), file_put_contents() or SimpleXMLElement::asXML(), I am unable to access the file with my FTP client or the cPanel File Manager. If I try to delete or edit them, I get errors. Dreamweaver suggests it is a permission problem or a network or filesystem fault (but I've set the permissions with chmod() to 0777, and when I check in cPanel, it confirms chmod 777). I also tried fileowner(), and the function returns int(99), the same owner as the files that I can access with my FTP client. It seems files and directories created with PHP can only be modified or deleted with PHP. I thought this must be a server-setup-related issue, so I am asking here. I am on a shared server, and I have no idea about setting up servers. Thank you for your time. Kind regards, Marius

    Read the article

  • How can I make my Ubuntu server accessible to the internet?

    - by wahid
    Hi, I have already installed the applications to make my server a web server. When I type the DHCP-assigned IP address into a web browser, I can access it, but all it shows is the default "It works..." page. I can copy files to /var/www successfully using WinSCP, yet I cannot see any of those files when I browse to the server from my Windows machine. Secondly, I tried to set up port forwarding on my home SMC router, but it only accepts local LAN IPs, and my Ubuntu server picks up its address from the router via DHCP. What should I do? Can you help please? Thanks,
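
    Two things have to line up here: Apache has to be serving the copied files, and the router has to forward port 80 to a stable LAN address. A quick sanity check from the server itself (a sketch; /var/www as the docroot is an assumption based on the question):

        ls -l /var/www                    # are the copied files actually there?
        sudo netstat -tlnp | grep ':80 '  # is Apache listening on port 80?
        curl -I http://localhost/         # does Apache answer locally?

    For the router side, giving the server a static LAN IP (or a DHCP reservation on the SMC router) gives the port-forwarding rule a fixed target. The "It works" page usually means Apache's default index.html is still being served ahead of the uploaded files.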

    Read the article

  • How to search for a folder from the Windows 8 Start screen

    - by Edward Brey
    In Windows 7, if you press the Windows key and type the name of a folder, the folder shows up among the Start menu search results. In Windows 8, if you do the same thing, no folders are listed. The Files filter shows files with matching names, but no folders. I realize that you can still search for folders from the Windows Explorer search box, but navigating that way is a bit slow and clumsy. Is there a quicker way, in particular a way to search directly from the Windows 8 Start screen?

    Read the article

  • Recommend web file-sharing software, please

    - by Baczek
    I'm looking for a web platform to put company files on. My requirements: it should be accessible via a browser; it should be open source; it must be installable on our own server (Dropbox is a no-go); it must have an option to put an access-time limit on a file; it must perform garbage collection automatically after a file expires; it must be able to mark files as public or private; an option to protect a file with a PIN code, for users without accounts in the system, would be nice to have. The problem is I don't even know what to search for - all my googling results in either complete groupware solutions or P2P file-sharing software. If such a thing doesn't exist, please don't hesitate to say so, so I can crawl to a corner and cry myself to sleep. TIA

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different filesystem for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
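
    One mitigation worth noting: htcacheclean also has a daemon mode, which trims the cache continuously in small increments instead of scanning the whole tree in one expensive batch pass. A sketch (the interval is in minutes; -n runs it niced, -t removes empty directories):

        htcacheclean -d30 -n -t -p /htcache -l 15G

    Whether steady-state trimming can keep up with 16 GB/day of growth on slow iSCSI is exactly the open question here, but it at least spreads the IO load out instead of concentrating it into multi-hour spikes.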

    Read the article

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers, and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets - predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with S3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync solution might not work particularly well with a single db.sql file created with mysqldump, and I assume this means the whole database gets transferred each time. With multiple databases of over 1GB, this is going to add up to a lot of traffic (and $s) very quickly. With the image files I could simply transfer only the files modified within the last day, which would be far simpler. What approach should I look at?
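
    A sketch of the commonly used pattern: dump each database to a dated, compressed archive and let s3cmd sync skip anything already uploaded (the database name, paths, and bucket are examples):

        # dump and compress; --single-transaction avoids locking InnoDB tables
        mysqldump --single-transaction mydb | gzip > /backup/mydb-$(date +%F).sql.gz

        # upload only the archives S3 does not already have
        s3cmd sync /backup/ s3://my-backup-bucket/sql/

    Because each day's dump is a new file, sync never re-transfers old dumps, and gzip typically shrinks a SQL dump severalfold, which helps with the per-GB transfer cost.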

    Read the article

  • Proper way to connect SATA and IDE Hard drives together?

    - by Bartek
    I have an old IDE hard drive with a broken Windows install on it. It just won't boot up, and I've tried a variety of solutions. That's fine; I really just need a few files from the hard drive. I have a working PC that uses a SATA-connected hard drive. I would like to connect the old IDE hard drive to that computer, browse through the file system, grab the files, and copy them to my existing computer. My problem is that in my few attempts to connect the IDE drive, I get Boot Disk Failures and so forth. I guess it's trying to boot from the IDE drive, but I'm not really sure. Any help would be appreciated. Thanks!

    Read the article

  • Corrupted file, hard drive test?

    - by all-R
    Hi guys, I'm currently on a MacBook with a 1TB external hard drive connected through a USB hub which is connected to my MacBook. The problem is, my disk, which is partitioned in two (one HFS+ and one NTFS), keeps getting corrupted. Recently it was my HFS+ partition; I could not repair it using Apple's Disk Utility, but I was able to back up my files. Does this mean my hard drive is failing? Is it because of my USB hub? I also keep all my iTunes library on my external HD (the HFS+ partition), and did a lot of transfers lately - adding files, removing them, etc. The last time, my partition got corrupted after a lot of deleted items. If anybody has an idea of what to check first, or what could cause the problem, I would appreciate it :) Thanks!
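
    The command-line checks behind Disk Utility can report more detail than the GUI does. A sketch (the identifier disk2s2 is an example; get the real one from diskutil list):

        diskutil list                       # find the HFS+ partition's identifier
        diskutil verifyVolume /dev/disk2s2  # read-only consistency check
        sudo fsck_hfs -fy /dev/rdisk2s2     # forced repair on the unmounted volume

    Repeated corruption across two different filesystems on the same disk points more toward the drive or the USB-hub link than toward either filesystem, so connecting the drive directly (without the hub) and checking its SMART status, where the enclosure allows it, would also be worthwhile.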

    Read the article

  • SQL Server 2008 R2 Writing To Text File

    - by zzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
    I used to write to text files from SQL Server using the code listed below:

        DECLARE @FS INT         -- File System Object
        DECLARE @OLEResult INT  -- Result message/code
        DECLARE @FileID INT     -- Pointer to file

        -- Create file system object (OLE Object)
        EXECUTE @OLEResult = sp_OACreate 'Scripting.FileSystemObject', @FS OUT
        IF @OLEResult <> 0 PRINT 'Scripting.FileSystemObject.Failed'

        -- Open file
        EXECUTE @OLEResult = sp_OAMethod @FS, 'OpenTextFile', @FileID OUT, @FileName, 8, 1
        IF @OLEResult <> 0 PRINT 'OpenTextFile.Failed'

    It appears this is no longer supported in SQL Server 2008 R2. How should I export to text files in SQL Server 2008 R2? Link claiming this is no longer supported: http://social.msdn.microsoft.com/Forums/en/transactsql/thread/f8512bec-915c-44a2-ba9d-e679f98ba313
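
    Two hedged options, depending on whether the OLE route is truly gone on the instance or just switched off. The sp_OA* procedures still ship with SQL Server 2008 R2 but are disabled by default; if policy allows, they can be re-enabled at the instance level:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'Ole Automation Procedures', 1;
        RECONFIGURE;

    Alternatively, the bcp utility exports a query result to a text file from the command line without any OLE automation, e.g. bcp "SELECT col FROM db.dbo.tbl" queryout C:\out.txt -c -T -S myserver (the server name and query are examples).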

    Read the article

  • Deleting file in Samba doesn't delete file?

    - by Jeff Welling
    Why, when I delete some files in Samba, does it not delete them but instead merely change their filename from filename.txt to ._filename.txt? This is not the behaviour one would expect when "deleting" a file, so I'm wondering if there's an option I forgot to set somewhere in the Samba config. It does this to some files but not to others; I have not yet spotted a pattern to its choosiness. There is an Ubuntu 12.04 machine and a Mac OS X machine which have write (and thus delete) capability; no Windows machines have write permission.
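
    One hedged observation: files whose names start with ._ are the AppleDouble metadata sidecars that OS X clients create on non-HFS volumes, which would explain why only some files are affected. If that is the cause here, a common smb.conf tweak is to veto them so deletes clean them up too (the share name is an example):

        [share]
           veto files = /._*/.DS_Store/
           delete veto files = yes

    With delete veto files = yes, removing a file or directory also removes the vetoed sidecar files instead of leaving them behind.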

    Read the article

  • What to do when you cannot type a letter in Cygwin/bash

    - by Stenemo
    I had a very strange issue that happened as I was editing .bashrc (or possibly .profile), which made it impossible to type the letter "a" (it did not show up on screen, although I could type it in all other programs as usual). I am not sure how, but I was trying to get aliases to work on my computer at the time, so it is possible that I somehow aliased "a" to "", although I am not sure how that would have happened. I solved this by copying all the files from "cygwin\etc\skel\" (these are the backup starting files, in case you ever need to replace them) into my home folder. I am leaving this question here so that other people who run into the same problem know what to do. (I am not sure why I cannot mark the question as solved at the moment; I hope someone who can edit it will do so for the next person with this problem.) Also, I am not sure if this belongs in this forum or another one, but I guess it is more of a Unix question.
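
    For what it's worth, an alias cannot intercept a single keystroke - aliases are only expanded on whole command words - but a stray readline binding (for example, a line in ~/.inputrc picked up while editing startup files) can swallow one letter exactly like this. A hedged way to check and undo it for the current session:

        bind -p | grep '^"a"'      # show what, if anything, "a" is bound to
        bind '"a": self-insert'    # restore the default: typing a inserts a

    If the grep shows anything other than self-insert, the offending line is probably in ~/.inputrc or something sourced by the startup files.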

    Read the article

  • MySQL gzipped Export in PhpMyAdmin has wrong size in Mozilla

    - by Michal Gow
    That is really strange. I am using phpMyAdmin 2.11.9.6 on Linux hosting. When I export databases using "gzipped" compression in Mozilla Firefox, I get files which have the size of the uncompressed database, but they seem to download at an incredible speed (10 times quicker than my ISP makes possible). So in the end: for a 10 MB database, I get a 10 MB gzip downloaded in moments; it indeed shows 10 MB on the drive; and it is corrupted. Zip compression works just fine (I get a file of circa 1 MB with the correct compressed contents of the database). And the weirdest thing: this happens only in Mozilla Firefox (13.0.1); Internet Explorer 9 downloads correct gzipped files... Any hint?
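
    These symptoms (uncompressed size, impossibly fast transfer, only one browser affected) fit a known pattern: the server sends the dump with a Content-Encoding: gzip header, Firefox transparently decompresses the stream, and then saves the already-decompressed data under the .gz name. A hedged way to confirm what actually landed on disk (the filename is an example):

        file export.sql.gz     # "gzip compressed data" vs. plain ASCII text
        gzip -t export.sql.gz  # integrity test; fails if the file is not really gzip

    If file reports plain text, the workaround is simply to rename the download to .sql - the data itself is intact, just not compressed.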

    Read the article

  • How to properly edit hosts, hostname and resolv.conf?

    - by Firewall
    I've been searching the internet for a real beginner's tutorial on the subject, but could not find any direct information on how to edit these files the proper way. I've got a Debian internet server that I use to host some personal domains and run Squid and rTorrent. The server is up and running with no problems, but I am confused about a few things. Let's say that I named my server (foo), my domain is (example.com), and my public IP is 95.211.133.200. Now: should /etc/hostname contain foo.example.com, or just foo (the server name)? Should /etc/hosts contain: 127.0.0.1 localhost.localdomain localhost and 95.211.133.200 foo.example.com foo? Should /etc/resolv.conf contain (along with the nameservers) both "domain example.com" and "search example.com", or just the first one? Are there any other files that I should edit in order to make things right? Last thing: the command domainname returns (none). I believe it should return (example.com). What should I do to correct that?
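
    A conventional Debian layout for the names above (a sketch; the nameserver address is a placeholder for whatever resolver the datacenter provides):

        # /etc/hostname - the short name only
        foo

        # /etc/hosts - public IP maps to the FQDN first, then the short name
        127.0.0.1       localhost.localdomain localhost
        95.211.133.200  foo.example.com foo

        # /etc/resolv.conf - "domain" and "search" are mutually exclusive;
        # the last one listed wins, so pick one
        search example.com
        nameserver 192.0.2.53

    On the last point: domainname reports the NIS domain, which is legitimately "(none)" on a machine that does not use NIS; dnsdomainname (or hostname -d) is what should print example.com once /etc/hosts is set up as above.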

    Read the article

  • Steps to install solely Ubuntu 13.04 on a Dell Inspiron 14z ultrabook with SSD+HDD

    - by rishy
    I have tried a few things, like disabling Intel Smart Response and choosing AHCI in the BIOS, but there are certain problems I am still facing. I can't see my SSD during the installation of Ubuntu (I am planning to install Ubuntu on the SSD and keep other files on the HDD). When I run Ubuntu, my laptop overheats and battery life drops to 90 minutes (I guess it's related to my graphics card, an ATI Radeon HD 7570). The cooling fan seems to run at full speed; it worked much better in Windows. So, overall, I want to know the exact steps I need to follow to install Ubuntu on my SSD and use my HDD for other files, and how I can get rid of the overheating and battery-life problems.

    Read the article

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point-of-sale program that is used, for some reason, answers requests for this type of information with PDF files. The issue: the PDF files look to be in a format that should be easy to copy and paste into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day, with the end of this Date/Time information followed by all of the Total Sales values that should be attached to a Date and Time of transaction. (Note: there are no duplicated dates in the Date column; i.e., multiple transactions for a day have one yyyy/mm/dd listed for the first row but not the following rows.) While it was a huge pain, it was possible, in about four or five steps, to get the single column of data broken out into three columns that matched the PDF. The second page of the PDF file, when copy/pasted into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third being the Hours of the transactions, and the final third being the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales, due to the lack of duplicated dates in the Date column, as mentioned above. My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far with absolutely nothing remotely close to usable output (both methods have so far done nothing but produce blank documents). Before I go back and pester my coworker into figuring out a way to create a report in some format other than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet so that it will look the same? I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit to let somebody actually see what it is that needs to be dealt with; just let me know. The PDFs are less than 100 KB each, so sending them shouldn't be a burden to any interested party.
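
    One tool worth trying before pestering the coworker: Poppler's pdftotext with the -layout flag, which preserves the on-page column alignment that clipboard copy/paste destroys. The resulting text file can then be imported into a spreadsheet as fixed-width or whitespace-delimited data. A sketch (the filename is an example):

        pdftotext -layout daily-sales.pdf daily-sales.txt

    Whether the three columns survive intact depends on how the PDF was generated, but for machine-produced reports like point-of-sale output, -layout very often keeps rows and columns together.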

    Read the article

  • Accidentally deleted the software for a MyPassport Essential SE 1TB hard drive

    - by user26192
    I'm posting for a friend of mine. She bought a WD MyPassport Essential SE 1 TB hard drive the other day. When she plugs the USB cable into her laptop, the drive cannot be recognized by the SmartWare software. While she was doing a backup of her files, McAfee was running in the background. Since the backup was taking so long to finish, she decided to pause it. She tried to delete the partially backed-up files, but instead she accidentally deleted all the files in the folder, including the pre-installed software. Now, when she tries to start up the MyPassport, the SmartWare doesn't show up anymore. Can someone please give us advice on what she can do about this? Thank you.

    Read the article

  • Using "touch" to create directories?

    - by user66732
    1) In the "A" directory: find . -type f > a.txt 2) In the "B" directory: cat a.txt | while read FILENAME; do touch "$FILENAME"; done 3) Result: step 2) "creates the files" (I mean files with the same names, but 0 bytes in size) - OK. But if there are subdirectories in the "A" directory, then step 2) can't create the files inside them, because those directories don't exist under "B". Question: is there a way to make "touch" create the directories too?
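
    touch never creates parent directories itself, but the loop can create them first with mkdir -p. A sketch of an adjusted step 2), run from inside "B":

        while IFS= read -r f; do
          mkdir -p "$(dirname "$f")"  # create any missing parent directories
          touch "$f"                  # then create the empty file
        done < a.txt

    The read -r and the quoting keep filenames with spaces or backslashes intact, which the original cat | while read version would mangle.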

    Read the article

  • SQL Server environment

    - by Olegas D
    Hello, I'm considering some changes to our current sales environment and trying to weigh all the pros and cons. The current situation: a SQL Server machine (a quite decent HP server - server1) plus a backup server (a smaller Dell server - server2). All SQL files and SQL Server itself are on server1. If something goes wrong with server1, I will have to fail over to server2 manually. Connecting to the SQL Server: 1 HQ (where the server is located) + 4 sites through VPN. Now I'm considering two scenarios: 1) buy a storage system, upgrade the existing servers (add RAM, upgrade processors), and go for VMware ESXi; 2) rent a server at a datacenter, plus a virtual server in case the real server goes down, and also rent space at a data-storage provider to keep the SQL files there. Has anyone considered these options and maybe come up with a good pros/cons list? ;) Thanks

    Read the article

  • Best Practice: Migrating Email Boxes (maildir format)

    - by GruffTech
    So here's the situation: I've got about 20,000 maildir email accounts chewing up several hundred GB of space on our email server. Maildir by nature keeps thousands of tiny little files instead of one .mbox file or the like... So I need to migrate these several million files from one server to the other, for both space and life-cycle reasons. The conventional methods I would use all work just fine; rsync is the option that comes immediately to mind. However, I wanted to see if there are any other, better options out there. Rsync not handling multi-threaded transfers sucks in this situation, because it never actually gets up to speed and saturates my network connection; because of this, the transfer from one server to another will take hours upon hours, when it shouldn't really take more than one or two. I know this is highly opinionated and subjective and will therefore be marked community wiki.
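
    A common workaround for rsync being single-threaded is to run several rsyncs concurrently, one per top-level account directory, so the link stays saturated. A sketch, assuming the maildirs live under /var/mail on both hosts (paths and hostname are examples):

        # one rsync per account directory, 8 in flight at a time
        ls /var/mail | xargs -P8 -I{} \
          rsync -a /var/mail/{}/ newserver:/var/mail/{}/

    With millions of small files, most of the time goes into per-file overhead rather than bandwidth, which is exactly what the parallelism amortizes. Another frequently cited option is tar over ssh for the initial bulk copy (tar -C /var/mail -cf - . | ssh newserver 'tar -C /var/mail -xf -'), followed by a final rsync pass to catch changes.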

    Read the article

  • Speeding Up Search On Ubuntu File Server Accessed Through Windows

    - by John Birdy
    I run an Ubuntu box as a media server, which I use either to share files (copy and paste off the network drive) or to stream to my computer (which runs Win7) or to my Xbox. I have a lot of files on there, especially music. Currently, when I'm searching for a file, I just use Windows search, which can be quite slow. I was wondering if there are better ways to search from my Windows box? I'd prefer not to SSH into the box and use find or something like that. Is there any way to speed up Windows search? Or an easy alternative? Thanks!

    Read the article

  • Mac failing (failed?) hard drive - is all hope lost?

    - by Daniel
    It's a 500 GB Seagate laptop hard drive that came with my MacBook Pro, with an Apple partition format. I've already replaced it and now have it connected externally via a SATA/USB adapter. I'm trying to get just a few files that I worked on while out of town when it crashed (and thus did not have my Time Machine backup drive). The drive will not mount, but OS X Disk Utility detects it and can read the capacity, model number, and even the name of the partition, which leads me to believe all hope may not be lost. Failed attempts so far: Disk Utility verify+repair says the drive cannot be repaired and that I should back up immediately (lovely); DiskWarrior says it cannot rebuild the directory due to hardware failure; Data Rescue quick and deep scans immediately failed; PhotoRec says "error reading sector" for every sector (at least for the few minutes I let it run before closing it to explore other options). What else can I try here? Again, I'm just looking for a few small files (Python scripts, to be specific) - not a full recovery.
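
    When every reader-level tool hits sector errors, the usual next step is a block-level image with GNU ddrescue, which copies what is readable first and retries the bad areas later; the recovery tools are then pointed at the image instead of the dying disk. A sketch (the identifier disk2 is an example; confirm with diskutil list, and unmount the disk first):

        # pass 1: copy everything readable, skip bad areas quickly
        sudo ddrescue -n /dev/rdisk2 rescue.img rescue.map
        # pass 2: go back and retry the bad areas a few times
        sudo ddrescue -r3 /dev/rdisk2 rescue.img rescue.map

    The map file lets the two passes resume and cooperate; even a partial image is often enough to pull a handful of small text files out with PhotoRec or by mounting the image.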

    Read the article
