Search Results

Search found 37650 results on 1506 pages for 'files'.


  • How do I sort files recursively by size and count the files and directories processed?

    - by user599395
    Hello! I'm a beginner in bash programming. I want to display the top $1 results of sorting the files under /etc/* by size. The problem is that at the end of the search, I must report how many directories and files were processed. I composed the following code:

        #!/bash/bin                  # note: reversed; the shebang should be #!/bin/bash
        let countF=0;
        let countD=0;
        for file in $(du -sk /etc/* | sort +0n | head $1); do
            if [ -f "file" ]         # note: tests the literal string "file", not "$file"
            then
                echo $file;
                let countF=countF+1;
            else if [ -d "file" ]    # note: opens a nested if that is never closed; bash wants elif
            then
                let countD=countD+1;
            fi
        done
        echo $countF
        echo $countD

    I get errors at execution. How do I combine find with du, since I must search recursively?
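
    A minimal working sketch of one way to do this, using du -a for the recursive listing (the /etc target and the $1 count come from the question; treat the rest as an illustrative rewrite, not the only correct form):

        #!/bin/bash
        # Usage: ./topfiles.sh N  ->  print the N largest entries under /etc
        countF=0
        countD=0
        # du -ak lists every file and directory recursively with its size in KB;
        # sort -rn orders by size descending and head keeps the N largest.
        while read -r size path; do
            if [ -f "$path" ]; then
                echo "$size $path"
                countF=$((countF + 1))
            elif [ -d "$path" ]; then
                countD=$((countD + 1))
            fi
        done < <(du -ak /etc/* 2>/dev/null | sort -rn | head -n "$1")
        echo "Files: $countF"
        echo "Directories: $countD"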

    Read the article

  • How can I check the location of perl and CPAN files?

    - by Rob
    I constantly have to set up new servers for an employer of mine, all for the same exact purpose, and as such they all have to be set up in exactly the same way. So I've created a PHP script that I run from my own box to automatically send over all the relevant files, compile everything, run updates, and everything else. However, these brand-new servers come with perl installed in different locations, which makes it a pain to copy over Config.pm for CPAN without going in and finding the location manually. Is there some command I'm unaware of that will hunt down the precise location? If it helps, the servers are usually CentOS 5.
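
    A few one-liners that should pin the locations down; they ask the installed perl itself, so they work wherever it happens to live:

        # Where is the perl binary on this box?
        which perl
        # Where does this perl keep its core library tree?
        perl -V:privlib
        # Which directory did CPAN.pm actually load from?
        perl -MCPAN -e 'print $INC{"CPAN.pm"}, "\n"'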

    Read the article

  • How to handle splitting a file under source control?

    - by sharptooth
    I have a .cpp file and a .h file containing a class: Class.cpp contains the implementation and Class.h contains the definition. The class is overcomplicated, so I want to separate some code and move it into a separate class. So I create NewClass.cpp and NewClass.h and move the code there. How do I handle this when the files are under SVN? I could simply "svn add" the two new files, but then they will appear as new and have no history. I could instead "svn copy" the two initial files under the new names and then edit the two old files and the two new files - then the new files will share history with the old ones. Which approach is better from the point of view of version control? Should the new files share history with the old files, or should they appear as new?
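
    For reference, the history-preserving variant is a working-copy-to-working-copy copy followed by edits to all four files; a sketch using the names from the question:

        # The copies remember where they came from, so "svn log" on the
        # new files will show the old files' history:
        svn copy Class.cpp NewClass.cpp
        svn copy Class.h NewClass.h
        # ...move the code between the four files, then:
        svn commit -m "Split NewClass out of Class"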

    Read the article

  • Elastix: how do I MOVE files from one server to another?

    - by yudayyy
    In my office, I have to schedule moving files from one computer to another (both running Elastix). My idea is to use cron, scp, and rm. Here is the script I use:

        scp -r /home/data/* [email protected]:/home/data1 && rm -r /home/data/*

    The script does the copy, but does not remove the source files. I have already read this question: How to _MOVE_ files with scp? The problem is that the computer doesn't have an internet connection, so I cannot install rsync on my Elastix machine:

        yum install rsync
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile

    ...and then it freezes. Any idea how to do this?
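
    One cron-friendly sketch that deletes each file only after its own copy succeeded (the remote user and host are placeholders, since the address in the question is obfuscated):

        #!/bin/sh
        SRC=/home/data
        DEST='user@remotehost:/home/data1'   # hypothetical; substitute the real account and IP
        for f in "$SRC"/*; do
            # && ensures rm runs only for files scp reported as copied
            scp -r "$f" "$DEST" && rm -rf "$f"
        done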

    Read the article

  • How do I enable automatic reloading of view files in development mode in JRuby on Rails?

    - by thekingoftruth
    I am developing an app in JRuby on Rails. For some reason, when I edit the view files, the development JRuby Mongrel server doesn't reload them. The perplexing thing is that after editing the controller files, the server reloads them just fine on the next request. This would be annoying even with MRI Ruby, but starting up JRuby Mongrel after every view edit is much slower, and much more annoying. (Note that once it starts up it's quite fast; the only issue is startup - the JVM has to load up every time I start JRuby Mongrel.) I'm running JRuby 1.5.0, Rails 2.3.5, and Java 6.

    Read the article

  • Why do my browsers display XML files as blank pages?

    - by n1313
    Every time I open an XML file, all I get is a blank page instead of the tag tree. The file itself is correct and loads okay; I can see it via View Source or in Firebug. I've tried turning off all my addons and running Firefox in safe mode, but the problem was not solved. I'm guessing that I've messed up my configuration somehow and Firefox now tries to render XML files as HTML. I've tried googling, with no success. Help, please? UPD: example file: http://lj.lain.ru/3/1273657698603.sample.xml Also, I've noticed that all of the browsers on the machine are now acting the same, so I'm changing the question accordingly.
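
    Since every browser on the machine now behaves the same way, the server's response headers are worth checking; browsers only render the collapsible tag tree for XML media types served without a stylesheet. A quick check against the example file above:

        curl -sI http://lj.lain.ru/3/1273657698603.sample.xml | grep -i '^content-type'
        # text/html here would explain the blank page; text/xml or
        # application/xml should produce the usual tag tree in a default profile.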

    Read the article

  • Can gedit on a Mac be used to edit files over ssh?

    - by Dave
    I use a linux machine at work and a mac at home. I can ssh from my machine at home to my work machine, but then the only editor I have access to on the command line is vi, which I don't like. Is there a way to use gedit on my mac to edit files remotely over an ssh connection? The page below says it can be done, but I think it assumes you are running gedit on Ubuntu. On my mac (OS X 10.5.8) I don't have the "bookmark" option when I click "connect to server". http://thecodecentral.com/2010/04/02/use-gedit-as-remote-file-editor-via-ftp-and-ssh-ubuntu/comment-page-1#comment-50558
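
    One editor-agnostic workaround is to mount the remote directory locally with sshfs (which needs MacFUSE on OS X 10.5) and then open the files with any editor; the host and paths here are placeholders:

        mkdir -p ~/work-mnt
        sshfs user@work-machine:/home/user ~/work-mnt
        gedit ~/work-mnt/path/to/file.txt   # or any other editor
        # When finished:
        umount ~/work-mnt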

    Read the article

  • How could I portably split large backup files over multiple discs?

    - by sourcejedi
    Context: I make backups / archives, primarily of photos. I'm experimenting with Bup, which is designed for backup to hard disk. Basically it creates Git repos which include packfiles of up to 1GB. But I still need last-ditch backups to keep offline and move offsite (and keeping them on read-only media is good too!). What are the options for archiving and splitting large files over several discs like CDs (and reading them back!)? I'd prefer methods which:

    - will stay readable in the future;
    - are portable, e.g. to Windows;
    - have known, simple implementations, so I could re-implement them myself if necessary (using Bup packs will stretch my robustness budget, so I want to be confident about how the other parts of the system would behave).

    I heard split archives are possible with both ZIP and 7-Zip. Is that right?
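
    Both ZIP (zip -s) and 7-Zip (its -v volume switch) can produce split archives, and the most portable fallback needs nothing beyond tar, split, and cat; a sketch with hypothetical names and a CD-sized chunk:

        # Archive, then cut into disc-sized pieces:
        tar -czf photos.tar.gz photos/
        split -b 650M photos.tar.gz photos.tar.gz.part-
        # Reading back is plain concatenation in order:
        cat photos.tar.gz.part-* > photos.tar.gz
        # Info-ZIP can also produce split archives directly:
        zip -r -s 650m archive.zip photos/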

    Read the article

  • Why am I getting "too many include files : depth = 1024"?

    - by BeeBand
    I'm using Visual Studio 2008 Express Edition, and keep getting the following error: "Cascadedisplay.h(4) : fatal error C1014: too many include files : depth = 1024". Obviously I'm doing something very wrong with include files, but I just can't see what. Basically, I have an interface class, StackDisplay, from which I want to derive CascadeDisplay in another file:

        #if !defined __BASE_STACK_DISPLAY_H__
        #define __BASE_STACK_DISPAY_H__   // note: DISPAY vs DISPLAY - the macro tested above is never defined
        #include <boost\shared_ptr.hpp>
        #include "CascadeDisplay.h"       // note: and CascadeDisplay.h includes this header right back
        namespace Sol
        {
            class StackDisplay
            {
            public:
                virtual ~StackDisplay();
                static boost::shared_ptr<StackDisplay> make_cascade_display(boost::shared_ptr<int> csptr)
                {
                    return boost::shared_ptr<StackDisplay>(new CascadeDisplay(csptr));
                }
            };
        }
        #endif

    and then in CascadeDisplay.h:

        #if !defined __CASCADE_DISPLAY_H__
        #define __CASCADE_DISPAY_H__      // note: same DISPAY/DISPLAY mismatch, so this guard never engages either
        #include "StackDisplay.h"
        #include <boost\shared_ptr.hpp>
        namespace Sol
        {
            class CascadeDisplay: public StackDisplay
            {
            public:
                CascadeDisplay(boost::shared_ptr<int> csptr){};
            };
        }
        #endif

    So what's up with that?

    Read the article

  • Remove postgres from Mac - installed in /usr/local/, can I just delete files?

    - by Richard
    I want to completely uninstall postgres and start from scratch - the version I have refuses to work with PostGIS 2.0. I have read other answers on how to do this, but none of them seem to fit the way postgres is set up on this machine. I'm not sure how postgres was originally installed - it wasn't via brew or Postgres.app or the EnterpriseDB installer - but it seems to be living in /usr/local:

        $ which psql
        /usr/local/pgsql-9.1/bin/psql

    The data directory itself appears to be /usr/local/var/postgres/. How can I kill it forever? Can I simply go to /usr/local and do rm -rf pgsql-9.1, and the same in /usr/local/var, and make sure there are no stray paths in my profile file? Or is there more to it than that? From memory, I think I'll need to delete the database files too somehow. Thanks for the help.
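
    A cautious removal sketch under those assumptions (binaries in /usr/local/pgsql-9.1, data in /usr/local/var/postgres); note that step 2 destroys every database in that cluster:

        # 1. Stop the server if it is running:
        /usr/local/pgsql-9.1/bin/pg_ctl -D /usr/local/var/postgres stop
        # 2. Remove the binaries and the data directory:
        sudo rm -rf /usr/local/pgsql-9.1
        sudo rm -rf /usr/local/var/postgres
        # 3. Drop any PATH entries that mention pgsql-9.1 from ~/.profile
        #    or ~/.bash_profile, then open a new shell and confirm:
        which psql || echo "gone"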

    Read the article

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage space concerns on my website. Currently, all uploaded files (flash video, images, etc.) live inside the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my high-speed main hard drive (a 15k RPM SCSI drive). I have another drive with much more capacity, but spinning at 10k RPM (so still fast, but not as good for random reads/writes as the main drive). That entire drive is mounted at /backup; right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files, and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
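
    The short answer is that Apache serves through symlinks as long as the relevant <Directory> section includes Options FollowSymLinks (commonly on by default). A sketch of the move itself, using the paths from the question:

        mv /home/account/public_html/files /backup/files
        ln -s /backup/files /home/account/public_html/files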

    Read the article

  • How do I prevent capistrano from overwriting files uploaded by users in their own folders?

    - by Hrishi Mittal
    I'm using Capistrano and git to deploy a RoR app. I have a folder under which each user has their own folder; when a user uploads or saves a file, it is saved there. When I deploy a new version of the code to the server, the user files and folders are overwritten with what's on my dev machine. Is there a way to make capistrano ignore some folders, like we do in git? This post - http://www.ruby-forum.com/topic/97539 - suggests using symlinks and storing the user files in a shared folder, but it's an old post, so I'm wondering if there is a better way to do it now. Also, does anyone know of any good screencasts/tutorials for using RoR + git + capistrano? Thanks.
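
    The symlink approach is still the standard pattern; capistrano's shared directory exists for exactly this. Expressed as plain shell so the moving parts are visible (paths are hypothetical):

        # Keep uploads outside the releases tree...
        mkdir -p /var/www/app/shared/uploads
        # ...and link them into the current release after each deploy:
        rm -rf /var/www/app/current/public/uploads
        ln -s /var/www/app/shared/uploads /var/www/app/current/public/uploads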

    Read the article

  • How to deal with files that are relevant to version control, but that frequently change in irrelevant ways?

    - by Jens Mühlenhoff
    .dproj files are essential for Delphi projects, so they have to be under version control. These files are controlled by the IDE and also contain some information that changes frequently but is totally irrelevant for version control. For example: I change the start parameters of the application several times a day, but don't want to accidentally commit the project file if only the part dealing with the start parameters has changed. So how should I deal with this situation? A clean solution would be to take the file apart, but that isn't possible with the Delphi IDE, AFAIK. Can you ignore a specific part of a file? We're using Subversion at the moment, but may migrate to Git soon.
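
    Stock Subversion has no way to ignore part of a versioned file, but if the Git migration happens, one common workaround keeps the file tracked while excluding it from day-to-day commits (MyProject.dproj is a placeholder name):

        # Stop recording local changes to the project file:
        git update-index --skip-worktree MyProject.dproj
        # When you deliberately change project settings, re-enable and commit:
        git update-index --no-skip-worktree MyProject.dproj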

    Read the article

  • Using C#, how do I convert ISO 8859-1 encoded text files containing Latin-1 accented characters to UTF-8?

    - by Tim
    I am being sent text files saved in ISO 8859-1 format that contain accented characters from the Latin-1 range (as well as normal ASCII a-z, etc.). How do I convert these files to UTF-8 using C# so that the single-byte accented characters in ISO 8859-1 become valid UTF-8 characters? I have tried using a StreamReader with ASCIIEncoding, and then converting the ASCII string to UTF-8 by instantiating an ASCII encoding and a UTF-8 encoding and then using Encoding.Convert(ascii, utf8, ascii.GetBytes(asciiString)), but the accented characters are rendered as question marks. What step am I missing? Thanks

    Read the article

  • Why won't my files push to my SFTP server?

    - by Matthew
    I'm having trouble pushing my branch to an SFTP server. I'm following the instructions here. When I push the branch, everything seems to complete successfully: I get the message "Created new branch.", and if I do "bzr push" again, it says "No new revisions to push." But when I ssh to the SFTP server and look at the directory I pushed my branch to, only the .bzr directory is there; none of my files are. Does anyone have any idea why this might be?
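
    If this matches the usual cause, it isn't a failed push: bzr push over SFTP transfers the branch history into .bzr but does not build a working tree on the remote side. A hedged sketch of the common remedy (host and path are placeholders):

        # Materialize/update the working tree on the server after a push:
        ssh user@sftpserver 'cd /path/to/branch && bzr update'

    The bzr push-and-update plugin automates that same step.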

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the mercurial repository (also taking care of user-uploaded files and folders, post-deploy changes, etc.). It's working well, but the project is getting big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and deploy only those (the backup is fine as it is). I'm using mercurial as the VCS, so my idea is to ask it for the changed files between two revisions, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and carve the changed files out of the output with grep/sed/etc. Two problems:

    - I have heard the horror stories about parsing the output of ls, and my guess is the same applies here: if I try to parse the output of hg log, the filenames will undergo word-splitting and all kinds of transformations.
    - hg log doesn't tell me whether a file was modified, added, or deleted; differentiating between modified and deleted files is the least I need.

    So what would be the correct way to do this? I'm using yafc as the FTP client, in case it matters, but I'm willing to switch.
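
    hg status (rather than hg log) addresses both problems: it prints one record per change with a machine-readable flag, and it can NUL-terminate records to dodge word-splitting. A sketch with placeholder revisions:

        # M = modified, A = added, R = removed, between REV1 and REV2:
        hg status --rev REV1:REV2
        # -0 separates records with NUL bytes, safe for any filename:
        hg status -0 --rev REV1:REV2 | while IFS= read -r -d '' entry; do
            flag=${entry:0:1}   # M, A or R
            file=${entry:2}     # path, may contain spaces
            case $flag in
                M|A) echo "upload: $file" ;;
                R)   echo "delete: $file" ;;
            esac
        done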

    Read the article

  • How do I retrieve files/documents that are not stored on the web server machine?

    - by jhorton
    I am trying to create an export feature so a user can download documents as a zip file. I have the feature working when the files are located on my local machine and I can use an absolute path. But after talking to the infrastructure team, I found out that the documents are not stored on the same machine as the web server; they are located on a server farm off site. I can query the database, which gives me a file path, but the path is more of a relative path. So can anyone help me understand how to use FileInfo to get files from another machine? I believe the infrastructure team said there is a virtual drive set up to the outside server. Am I able to use a virtual path somehow? Thanks.

    Read the article

  • Why are zip files made with Finder on a Mac twice as large as those made with the 'zip' command?

    - by user33947
    I have a directory of JPEGs. Each one is roughly 90k, as reported by Photoshop when saving and by the command-line tool 'ls'. When I get the properties for a file with Finder, it's double that - over 220k. Zipping the directory with Finder packages all this bulk as well, while "zip -v test.zip ./dir" makes a MUCH smaller zip file. Zipping the files on Windows also results in a much smaller file, roughly the same size as the one from the unix zip command, and file sizes are reported correctly on Windows too. I can't find any mention of this anywhere, so I'm asking here.
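
    A rough way to see where the extra bulk lives (filenames are hypothetical): compare the logical size with the on-disk size, then list what Finder actually put in the archive. Finder adds __MACOSX/ entries carrying resource forks and Finder metadata, which the command-line zip and Windows leave out:

        ls -l photo.jpg        # logical byte count
        du -h photo.jpg        # space allocated on disk (what Get Info tends to report)
        unzip -l finder-archive.zip | head -20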

    Read the article

  • Can I use a media player in Linux just to play files from an iPod (no sync, no manage, just play)?

    - by Somebody still uses you MS-DOS
    I have an iPod classic 160GB that I sync with my machine at home. I use Linux at work, and want to just plug in my iPod and listen to the tracks, with all the playlists and such. I don't want to sync anything; I just want to listen to the tracks as if I were using the iPod itself. Why? Because this way I can use the USB port. So, I don't want to manage my iPod in Linux, I just want to listen to the tracks on it, as if it were a local library that happens to live on my iPod. (I've tried gtkpod; it shows my files, but I can't play, shuffle, etc. It would be interesting to have a complete audio application that handles everything as if it were a local library.)

    Read the article

  • Creating an HTML file by combining multiple PHP files via the command line?

    - by FishOrDie
    Is it possible to combine multiple PHP files via the command line to create a single HTML file? For example, this saves the rendered version of a single PHP file as HTML:

        php /path/to/my/file/filename.php > /path/to/my/file/test.html

    I need to combine multiple files, but I can't seem to get it to work. Ideally, it would be something like this:

        php /path/to/my/file/filename.php + /path/to/my/file/filename2.php + /path/to/my/file/filename3.php > /path/to/my/file/test.html

    Is this possible? If so, how?
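
    php has no such '+' syntax, but the shell can concatenate the rendered output itself; a sketch using the paths from the question:

        # Append each script's output in turn:
        php /path/to/my/file/filename.php  >  /path/to/my/file/test.html
        php /path/to/my/file/filename2.php >> /path/to/my/file/test.html
        php /path/to/my/file/filename3.php >> /path/to/my/file/test.html

        # Or group the commands and redirect once:
        { php filename.php; php filename2.php; php filename3.php; } > test.html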

    Read the article

  • How do I override the browser's default download behavior for files?

    - by moha297
    Lots of times we have to download files from the net. In IE we get the ugly download progress bar; in Firefox we get a pop-up window, etc. However, I had never seen this behavior overridden in any manner until recently, on the site *thesixtyone DOT com*. If you get to download a song free and click the OK link to start the download, you get a pop-up to select the location in the default Windows style, and then you see the progress bar shown below. Any ideas on this? I am trying to see how these guys did it. You can see the image here: http://highwaves.files.wordpress.com/2010/04/61-download-bar.jpg

    Read the article
