Search Results

Search found 40229 results on 1610 pages for 'deleted files'.


  • Identifying .doc/.docx files that contain images

    - by rev
    I'm moving my notes to Evernote. To do this I need to convert my .doc/.docx files to RTF, since I have a script that imports RTF into Evernote. However, some of my .doc/.docx files contain images. Is there any way to identify which files contain images without opening them all? I have thousands. That way I can open just the few that have images and copy/paste their content straight into Evernote. I should say that I'm using OS X 10.6.8.
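    Since a .docx file is just a ZIP archive, a hedged starting point is to check each one for entries under word/media/, which is where embedded images live. Binary .doc files are harder to inspect reliably, so this sketch covers only .docx; the ~/notes path is a placeholder.

        #!/usr/bin/env bash
        # Print every .docx under ~/notes that contains at least one image.
        find ~/notes -name '*.docx' -print0 | while IFS= read -r -d '' f; do
            # unzip -l lists archive members; images sit under word/media/.
            if unzip -l "$f" | grep -q 'word/media/'; then
                echo "has images: $f"
            fi
        done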


  • How to remove a large number of files/folders in Linux

    - by user1745713
    We are using Hadoop to split a table into smaller files to feed to Mahout, but in the process we created a huge number of _temporary logs. We have an NFS mount for the Hadoop volume, so we can use all the Linux commands to delete folders and files, but we just can't get them deleted. Here's what I've tried so far:
        hadoop fs -rmr /.../_temporary : hangs for hours and does nothing
        on the NFS mount: rm -rf /.../_temporary : hangs for hours and does nothing
        find . -name '*.*' -type f -delete : same as above
    The folders look like this (there are 38 of these folders inside _temporary):
        drwxr-xr-x 319324 user user 319322 Oct 24 12:12 _attempt_201310221525_0404_r_000000_0
    Their contents are actually folders, not files; each one of those 319322 folders has exactly one file inside. Not sure why it does the logging this way. Any help is appreciated.
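    With trees this size, much of the time goes into per-entry round trips, which are especially slow over NFS, so running the deletion on the NFS server itself (or via hadoop fs -rmr -skipTrash, in case moving the tree to the trash is what hangs) would help. A commonly suggested shell workaround, sketched here with placeholder paths, is to let rsync empty the tree, since it deletes with less per-file overhead than shell globbing:

        mkdir /tmp/empty
        # Mirror the empty directory onto the huge one; --delete removes
        # everything that is not present in the (empty) source.
        rsync -a --delete /tmp/empty/ /path/to/_temporary/
        rmdir /path/to/_temporary /tmp/empty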


  • How to strip a logfile with grep to remove dispensable information?

    - by NES
    My logfile has the following format:
        Dec 26 13:11:48 192.168.1.1 kernel: ACCEPT IN=br0 OUT=vlan1 SRC=192.168.1.2 DST=74.125.43.147 LEN=44 TOS=0x00 PREC=0x00 TTL=63 ID=9312 DF PROTO=TCP SPT=11733 DPT=80 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B4)
    Now I'm trying to remove some dispensable information to make the output easier to read, and to put the result into a new file. The result should keep only the following fields and look like this:
        Dec 26 13:11:48 192.168.1.2 74.125.43.147 TCP SPT=11733 DPT=80
    How can I do that?
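    grep alone can only select whole lines, not rearrange fields, so here is a hedged sketch with sed instead (filenames are placeholders; use sed -r on older GNU sed): capture the timestamp, SRC, DST, PROTO, SPT and DPT fields and drop everything else.

        # Keep timestamp, source, destination, protocol and the two ports.
        sed -E 's/^([A-Za-z]+ +[0-9]+ [0-9:]+) .*SRC=([0-9.]+) DST=([0-9.]+) .*PROTO=([A-Z]+) (SPT=[0-9]+) (DPT=[0-9]+).*/\1 \2 \3 \4 \5 \6/' \
            firewall.log > stripped.log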


  • How to write to a file and, while the file is still being written, read and parse its contents using

    - by Isabelle
    Hello. I'm trying to write a shell script that logs the output of a command to a file, but since the command takes a long time to complete (about 15 minutes), I would like to start parsing its output (the content of the file) before the command completes, so I can print messages to standard output (for the user), like: 10% complete, 45% complete, and so on. Program steps: 1. Redirect the command to a file: $(command) > $FILE. 2. Start reading and parsing the output ($FILE) before the command is finished. I thought of using parallel programming, but I haven't got the hang of it. Any help would be appreciated. Best regards.
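    One hedged way to do this without explicit parallel programming is to background the command and follow its log with tail; long_running_command and the progress markers below are placeholders to adapt. GNU tail's --pid option makes the reader exit when the writer finishes.

        #!/usr/bin/env bash
        FILE=/tmp/command.log
        long_running_command > "$FILE" 2>&1 &   # run in the background
        cmd_pid=$!

        # Follow the file as it grows; --pid makes tail exit once the
        # background command terminates (GNU coreutils).
        tail -n +1 -f --pid="$cmd_pid" "$FILE" | while IFS= read -r line; do
            case "$line" in
                *"stage 1 done"*) echo "10% complete" ;;
                *"stage 4 done"*) echo "45% complete" ;;
            esac
        done
        wait "$cmd_pid"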


  • Copy large files to multiple machines on a LAN

    - by Jonathan Callen
    I have a few large files that I need to copy from one Linux machine to about 20 other Linux machines, all on the same LAN, as quickly as feasible. What tools or methods would be best for copying these files, noting that this is not going to be a one-time copy? These machines will never be connected to the Internet, and security is not an issue. Update: The reason I'm asking is that (as I understand it) we are currently using scp serially to copy the files to each of the machines, and I have been informed that this is "too slow", so a faster alternative is being sought. According to what I have been told, attempting to parallelize the scp calls simply slows things down further due to hard drive seeks.
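    One approach that avoids both serial scp and seek-heavy parallel reads is a daisy chain: the source reads the data from disk once, and each machine saves its own copy while forwarding the stream to the next. A rough sketch with netcat follows; host names, the port, and paths are placeholders, and note that the listen syntax varies between netcat implementations (e.g. nc -l -p 9000 on traditional netcat). Multicast tools such as udpcast are another common suggestion when a chain is awkward to set up.

        # On the source machine: read once, send to the first host.
        tar -cf - /data/bigfiles | nc host1 9000

        # On each intermediate host: save a copy and forward the stream.
        nc -l 9000 | tee >(tar -xf - -C /data) | nc host2 9000

        # On the last host in the chain: just save.
        nc -l 9000 | tar -xf - -C /data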


  • smb share mounted through fstab, added files get read-only permissions

    - by Jocke
    I mounted my NAS in Ubuntu 12.10 and it works with read/write, but when I add a file or directory, that file gets read-only permissions. My fstab mount looks like this: //192.168.0.12/share/ /media/nas cifs credentials=/home/jocke/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0 If I mount the SMB share manually through the GUI it works, but not through fstab. What am I doing wrong?
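    A hedged guess: without an explicit owner, files created through a cifs mount can end up owned by root, so they look read-only to a regular user. Adding the uid/gid mount options (the user and group names below are assumptions based on the credentials path) tells the client which local user should own the files:

        //192.168.0.12/share/ /media/nas cifs credentials=/home/jocke/.smbcredentials,iocharset=utf8,uid=jocke,gid=jocke,file_mode=0777,dir_mode=0777 0 0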


  • What would be the best way to correlate logs and events on several hosts?

    - by user220746
    I'm trying to build a log correlation system spanning multiple hosts. SEC seems interesting, but I don't know if it will cover my needs. How could I correlate system events, logs, network events, etc. on multiple hosts at the same time, in real time? Examples: If 5 failed logins happened on host A in the last minute, and firewall B has denied lots of access attempts on different ports of A, then we assume there is a potential attack in progress on A. If the Apache service on host A didn't receive any requests for the last N minutes while the Apache service on host B did, then the load balancing could be faulty.
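    For the first example, here is a hedged sketch of a SEC rule; the log pattern is an assumption, and correlating across machines presumes the logs from all hosts are shipped to one place (e.g. via syslog forwarding) so SEC sees a single stream:

        # Raise an alert when 5 failed logins for the same source IP
        # are seen within a 60-second window.
        type=SingleWithThreshold
        ptype=RegExp
        pattern=sshd\[\d+\]: Failed password for .* from ([\d.]+)
        desc=Failed logins from $1
        action=write - Potential attack in progress from $1
        window=60
        thresh=5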


  • Windows Boot disc to save files?

    - by acidzombie24
    Somehow, after updating my legit Windows 7 OS (no pirated or modded software on my PC), I was greeted by a black screen. I popped in my Ghost disc and copied the files I need to an external HD. IIRC the Windows 7 disc can do that too. The problem with the way I did it with Ghost was that it expected me to select one file (an HD disc image), so I couldn't select multiple folders to move. Also, when I did move files, I had no idea if it had finished or how long it would take. My Linux live CD couldn't access the HD. Anyway, is there a disc I can use to easily copy files from my laptop to my external HD? I think Ghost, Windows 7 and Windows Server discs all allow this, but is there one that is better suited to copying files?


  • Getting SEC to only monitor latest version of a log file?

    - by user439407
    I have been tasked with running SEC to help correlate PHP logs. The basic setup is pretty straightforward; the problem I'm having is that we want to monitor a log file whose name contains the date (php-2012-10-01.log, for instance). How can I tell SEC to only monitor the latest version of the file (and, of course, switch to the newest log file every day at midnight)? I could create a link that always points to the latest version of the file and run a cron job at midnight to update the link, but I am looking for a more elegant solution.
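    Lacking something more elegant, here is a minimal sketch of the symlink-plus-cron fallback mentioned above (paths are placeholders); SEC is then pointed at the stable symlink name:

        #!/usr/bin/env bash
        # update-php-log-link.sh: point a stable name at today's log.
        # date +%F prints YYYY-MM-DD, matching names like php-2012-10-01.log.
        ln -sfn "/var/log/php/php-$(date +%F).log" /var/log/php/php-latest.log

        # crontab entry (midnight):
        # 0 0 * * * /usr/local/bin/update-php-log-link.sh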


  • SteamCMD can't add files to my home directory

    - by Angle O'Saxon
    I'm trying to clean up the administration of some game servers I run on an Ubuntu box, part of which has been finally setting permissions properly so I can run the Steam console tool that controls updates and such. I had been running it as root using sudo, but I changed the permissions so that I can start it as a regular user rather than root. That bit seems to work fine, but now when SteamCMD actually starts, it errors with the following output:
        ./steam.sh: line 24: /home/angleosaxon/.steampid: Permission denied
        Installing breakpad exception handler for appid(steam)/version(1334262703)
        SteamUpdater: Error: Couldn't create directory /home/angleosaxon/Steam/package, got error 13
        [  0%] Download complete.
        [----] Verifying installation...
        unlinked 0 orphaned pipes
        [----] !!! Fatal Error: Steam failed to load: *SteamStartEngine(0xbfa7cfa0) failed with error 1: Failed to open logfile /home/angleosaxon/Steam/steam.log
    Leaving aside the question of why it wants to add this information to my home directory, why is it getting access-denied errors? As I understand it, since it's being run by my account, it operates with my permissions, so it should be able to read/write in my home directory, shouldn't it? This is the command I'm using to run it:
        /opt/steamcmd/steam.sh "+login UserAccount \"This is not my actual password.\"" +force_install_dir $ServerDir "+app_update 215360 validate" +quit
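    A hedged diagnosis based on the history above: files created during the earlier sudo runs (such as ~/.steampid and ~/Steam) would still be owned by root, and root-owned entries block the unprivileged user even inside their own home directory. Something like this would verify and reclaim them (the username and group are taken from the paths in the error output):

        # Check who owns the offending paths...
        ls -ld /home/angleosaxon/.steampid /home/angleosaxon/Steam
        # ...and hand them back to the regular user.
        sudo chown -R angleosaxon:angleosaxon \
            /home/angleosaxon/.steampid /home/angleosaxon/Steam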


  • Windows - batch moving files to another folder/directory

    - by jdamae
    I am getting an error message to the effect of "unable to move files to a single file". I am not trying to do that. What I am trying to do is move files from one folder to another folder (staging) and then delete the original folder. If there is a better way to do this, please show me, since I am clearly not doing it correctly. Thank you. Here is my .cmd file:
        Y:
        move "Y:\ABC_files\*.js" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.css" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.png" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.htm" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.gif" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC.htm" "C:\Documents and Settings\user\Desktop\ABC_Stage\"
        rmdir "Y:\ABC_files"
        C:\"Program Files"\"App X"\App-IDE.exe -r ABC4.run


  • Ownership/permissions of uploaded files

    - by Cudos
    Hello. I want to find out if I am on the right track. My script uploads files to the directory "images". The directory has this setup: owner/group = www-data, permissions = 700. Questions: Is this a good way to secure the directory against a hacker uploading files? Would a hacker be able to upload files directly to the directory? Note: I have a bunch of other security measures in my upload script, plus an .htaccess file in the directory that disables script execution. I just want to know if the permissions on the directory are sensible. I run Apache 2.2.
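    For reference, a minimal sketch of that setup from a shell (assuming Apache runs as www-data). One caveat worth stating: mode 700 with owner www-data keeps other local users out, but it does not restrict anything the web server process itself is tricked into writing there, which is why the script-side checks and the no-execution .htaccess still matter.

        sudo chown www-data:www-data images
        sudo chmod 700 images   # only the owner (the web server user) may enter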


  • Problem processing files from a Content Distribution Network

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed that when a page is requested from some websites' servers, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their own server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your own server eliminate the gains of using a CDN? Is this a normal way of doing things, or am I misunderstanding the whole asset-management concept?


  • How do I compress and split files with the terminal

    - by Levan
    So what I wanted to do: compress (7zip) a 600 MB folder and split it into 199 MB parts. Sadly, when I tried this with Archive Manager it gave me an error, but I know that if I use the terminal it will work. I looked this up on Ask Ubuntu and found this command: 7z a -v5m -mx0 ubuntu.7z I understand that -v5m will split the archive into 5 MB parts and that -mx0 means it will not be compressed, but I do want compression, so what should I write instead of -mx0? Please note: I am using .7z because I will most probably use this file on a Windows PC. Thank you in advance.
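    A hedged sketch (the folder path is a placeholder): -mx takes a compression level from 0 (store only) up to 9 (maximum), and -v takes the volume size, so combining the two gives compressed 199 MB parts:

        # Compress 'folder' into 199 MB volumes at maximum compression.
        7z a -v199m -mx9 ubuntu.7z folder/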


  • Restoring file properties but not the complete files, from backup

    - by Jon
    While copying data from my old storage on a Linux computer to a new (Linux-based) NAS, I accidentally failed to carry the file properties (most importantly, the modification dates) over to the new location. I have also continued to use and modify the files at the new location, and hence cannot just copy everything over again. What I would like to do is a diff between the files in the old and the new storage and, for those that are identical, restore the properties from the Linux storage to the NAS storage files. Is there a clever way, such as a script or a tool, to do this? I could either run it on the Linux box or, in the worst case, from a remote Windows computer. Grateful for any suggestions. /Jon
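    A minimal sketch of such a script, with assumed mount points: walk the old tree and, wherever the file at the same relative path on the NAS still has identical content, copy the old timestamp across.

        #!/usr/bin/env bash
        OLD=/mnt/old
        NEW=/mnt/nas
        ( cd "$OLD" && find . -type f -print0 ) | while IFS= read -r -d '' f; do
            # cmp -s compares content silently; touch -r copies the
            # reference file's modification time onto the target.
            if cmp -s "$OLD/$f" "$NEW/$f"; then
                touch -r "$OLD/$f" "$NEW/$f"
            fi
        done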


  • Cached css/javascript files on Sun Java System Web Server

    - by Derp
    I'm doing front-end web development in a Solaris 10 / Sun Java System Web Server 7.0U2 environment. I have noticed that changes to static css or javascript files often do not take effect immediately, whereas changes to static html files always do. My best guess is that a default setting in the web server causes it to cache certain file types in order to provide reasonable performance out of the box. I don't have the admin server running--I'll need to edit the config files by hand. What change(s) can I make so that all of my css and javascript edits take effect immediately? Thanks!


  • What software this log file comes from? [closed]

    - by mickula
    Which software does this log file come from? Please give its full name.
        Internal IP Threshold FlowsDiff 40 flows/s, Diff: 73 flows/s
        Sum 26.962 flows/300s (89 flows/s), 32.162.000 packets/300s (107.206 packets/s), 1,198 GByte/300s (32 MBit/s)
        External 87.98.238.221, 26.958 flows/300s (89 flows/s), 32.156.000 packets/300s (107.186 packets/s), 1,198 GByte/300s (32 MBit/s)
        External 89.230.69.49, 2 flows/300s (0 flows/s), 2.000 packets/300s (6 packets/s), 0,000 GByte/300s (0 MBit/s)
        External 89.231.190.149, 1 flows/300s (0 flows/s), 3.000 packets/300s (10 packets/s), 0,000 GByte/300s (0 MBit/s)
        External 89.239.101.20, 1 flows/300s (0 flows/s), 1.000 packets/300s (3 packets/s), 0,000 GByte/300s (0 MBit/s)


  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for webpages that I or my colleagues make from the office's Windows network get repeated by another IP (one we don't know) a couple of seconds later. The user agent repeating our requests is:
        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)
    Does anyone have an idea?
    Update: I've got some more information now. The referrer of the replica is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one; it's one of a subnet (80.40.134.*). Only the first request to a resource gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky: I tried random URLs with different HTTP status codes and different file patterns, and 301s and 200s are redone, 404s are not, while image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after my first requests:
        66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
        50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"
    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll check later whether this also happens when I query another server where I have access to the access logs, and will update here.


  • Find Files That Contain Both Words in Notepad++

    - by SethO
    In Notepad++ (v5.9), I want to search for files which contain two words. For example, I would like to find all text files in a directory that have both Alpha and Bravo in the file. They may not be next to each other and they may have multiple occurrences of either. I just want to find the files that have at least one instance of each. Is there a way to structure this search without resorting to Regular Expressions? Thanks for the advice.
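    Notepad++'s Find in Files matches one pattern per pass, so a regex-free workaround inside the editor is to run two passes and intersect the result lists by hand. If a command line is acceptable instead, the intersection is a one-liner; this assumes GNU grep rather than Notepad++, and the directory is a placeholder:

        # Files containing Alpha, filtered down to those also containing Bravo.
        grep -rlZ "Alpha" ./docs | xargs -0 grep -l "Bravo"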


  • How to block access to files in the current directory with .htaccess

    - by kfir
    I have a few private files in a public folder and I want to block access to them. For example, let's say I have the following file tree:
        DictA
            FileA
        FileA
        FileB
        FileC
    I want to block access to FileB and FileA in the current directory, but allow access to the FileA inside DictA. The first thing that came to mind was to use the FilesMatch directive as follows:
        <FilesMatch "^(?:FileA)|(?:FileB)$">
        Deny from all
        </FilesMatch>
    The problem here is that FileA inside DictA will also be blocked, which is not what I wanted. I could override that by adding another .htaccess file to DictA, but I would like to know if there is a solution that won't involve that. P.S.: I can't move the private files to a separate folder.
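    A hedged alternative sketch: FilesMatch tests only the file name and is inherited by subdirectories, so it cannot tell ./FileA apart from DictA/FileA. Per-directory mod_rewrite, on the other hand, matches the path relative to the directory holding the .htaccess, so a request for DictA/FileA arrives as 'DictA/FileA' and does not match a pattern anchored at the directory root (this assumes mod_rewrite is enabled):

        RewriteEngine On
        # Forbid FileA and FileB in this directory only;
        # DictA/FileA does not match the anchored pattern.
        RewriteRule ^(?:FileA|FileB)$ - [F]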


  • Force .js files saved in ANSI encoding to show in UTF-8 on IIS 7.5

    - by Xcarpa
    I'm migrating a web system that currently runs on Windows Server 2003 with IIS 6 to IIS 7.5 on Windows Server 2008. This system generates JavaScript files with accented characters in ANSI (Portuguese - Brazil); these JavaScript files show alert messages, for example. With IIS 6 I have no problem with this, but with IIS 7.5 the accented characters do not appear correctly unless the files are in UTF-8. Is there any way to force these files, even in ANSI, to be served correctly by IIS 7.5? Thank you! Cheers, Xcarpa
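    Rather than forcing UTF-8, one hedged option is the reverse: declare the files' real encoding by adding a charset to the MIME type for .js in web.config, so browsers stop assuming UTF-8. The windows-1252 value is an assumption for Portuguese ANSI; adjust as needed.

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Replace the inherited .js mapping with one that
                   carries an explicit charset. -->
              <remove fileExtension=".js" />
              <mimeMap fileExtension=".js"
                       mimeType="application/javascript; charset=windows-1252" />
            </staticContent>
          </system.webServer>
        </configuration>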


  • Git doesn't sync files until committed, even if checked out in a different branch

    - by DertWaiter
    Okay, I have git 1.7.11.1 on Windows and a local test repository with two branches. One is master, with index.php and help.php. I then create another branch called slave :) From git bash I run rm help.php and it disappears from the folder, but I don't stage anything. I then checkout the master branch, which is supposed to restore help.php because it is not modified in the master branch, isn't it? But it does not. When I go back to the slave branch, commit the deletion, and then checkout master, help.php appears. Is that the way it is supposed to work? Why?
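    A hedged explanation, with the commands to verify it: an uncommitted (even unstaged) deletion is a working-tree change, and git checkout carries working-tree changes across branches rather than overwriting them, because both branches still record the same version of help.php. Once the deletion is committed on slave, the branches differ in that file, so checking out master rewrites it.

        git checkout slave
        rm help.php
        git status                 # shows 'deleted: help.php' (unstaged)
        git checkout master        # the unstaged deletion is carried along
        git checkout -- help.php   # restores help.php from master's HEAD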


  • Writing a basic shader for large input files

    - by Zoltan Varadi
    I started writing a shader for my iOS app, and instead of starting from scratch I used the tutorial here: http://www.raywenderlich.com/3664/opengl-es-2-0-for-iphone-tutorial I wrote an import function, initially for Wavefront .obj models. My problem is that I can't handle larger inputs (with a simple cube it was working). I realized that the indices array is an array of GLubyte values, which is an unsigned char, so as a result I can't have more than 256 indices. I modified it to GLuint, but then I only get a blank screen. What else needs to be modified? P.S.: the source can be downloaded from here: http://d1xzuxjlafny7l.cloudfront.net/downloads/HelloOpenGL.zip


  • Process video (Canon) .mov files

    - by user613326
    Well, I would like to program something to process HDR video made by Magic Lantern, a Canon add-on. It doesn't change the video format; it's just an add-on that can produce HDR video. Such videos are a bit complex to make, so I would like to use some math, do the processing myself, and make the software freeware (as a thanks to the creators of Magic Lantern). The problem is that the usual converters produce HDR with a lot of artifacts, and I would like to make something better (for free) using some new algorithms. I have already made this work fine on individual images, so my ideas work. Now I want to do it on the Canon 60D's video format, Canon's MOV format, and so far I am out of luck reading it out. It must be possible though, as I know some other projects do it. I would not like to first export the movie to JPGs and then back to video, as that requires a lot of disk space; I would like to retrieve individual frames, do my math based on multiple frames, and then build a new movie from them. The output video can be of any type, AVI or MOV again. Does anyone know of a library that can do that (read and save), so I could use it in a C# project? (I prefer C# over C++, but C++ is an option for me too.)
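    Not a C# library, but a hedged sketch of the pipeline idea with ffmpeg, which can read Canon MOV files: decode to raw frames on stdout, run them through your own filter, and re-encode on the other end, so no intermediate image files ever touch the disk. Here my_hdr_filter, the frame size, and the frame rate are placeholders; from C#, the same pipeline is typically driven by spawning ffmpeg with redirected standard streams and reading fixed-size frame buffers.

        # Decode input.mov to raw RGB frames, pipe them through a custom
        # frame-by-frame HDR filter, and re-encode the result to H.264.
        ffmpeg -i input.mov -f rawvideo -pix_fmt rgb24 - \
          | my_hdr_filter \
          | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 30 -i - \
                   -c:v libx264 output.mov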

