Search Results

Search found 4783 results on 192 pages for 'a txt'.


  • How to convert html textfield/area data to server-side txt file? [closed]

    - by olijake
    How can I make a script that takes the text/data from an HTML textfield/textarea, sends it to the server, and saves it there as a .txt file for storage?

    NOTE: I am hosting a website (for testing purposes) using Apache 2.2 on a Windows 7 machine. I downloaded PHP version 5.4.7, but have not yet installed it on my server (not sure if I will need it, and also not sure how to install it).

    1st problem: saving text to the server. An HTML page/section with a title textfield, a text textarea, and a submit button. You would enter a title and the text/notes you need, then press the submit button to store the textarea content on the server as a .txt file named after the title.

    2nd problem: opening text from the server. An HTML page with a list of all the .txt files, OR a textfield for entering a title plus a submit button that sends the title of the requested .txt file to the server, which would then load it on the page.

    Here is what I have so far (let me know if there is something I should change or if something just isn't correct in the index.html code I have right now):

        <!DOCTYPE HTML>
        <html>
        <head>
        <title>Insert Title</title>
        <meta http-equiv="Content-Type" content="Text/HTML; charset=UTF-8"/>
        </head>
        <body>
        <form method="post" action="save.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data">
        <input type="text" name="title"><br/>
        <textarea rows="20" cols="100" id="text" name="text"></textarea><br/>
        <input type="submit" name="submit" value="Submit Text to Server">
        </form><br/>
        <hr style="width: 100%; height: 4px;"><br/>
        <form method="post" action="open.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data">
        <input type="text" name="title"><br/>
        <input type="submit" name="submit" value="Submit Txt File Request">
        </form><br/>
        <div>Opened text file displays here or goes on another page</div>
        </body>
        </html>

    I plan on using a server-side language/script, but ANYTHING that gets the job done is fine. I already tried looking into using some ASP/JScript/PHP, but have had some trouble implementing it on my server (i.e. getting the modules loaded and telling the server which file types to parse). I know this may be an extremely easy fix, and in that case hopefully you wouldn't mind helping me out a little :). If it turns out that this is MUCH more complicated than I expect, then feel free to let me know, so I don't waste my time running in circles. I appreciate any help/assistance that you can provide. Thanks, Jake

    EDIT: Wrong Apache version.

    In response to the comments/closing of this thread, my question: "How exactly do I install the PHP module on the Apache server? Is this even possible? Is this even recommended?" To clarify: I understand the basics of PHP, I just have trouble with INSTALLING PHP on the Apache server. (I have used PHP before, but never successfully on Apache, so far...)

    For my script I wrote something similar to this already (using fopen() and a few other commands):

        <?php
        fopen("notes.txt", "r");
        file_put_contents("notes.txt", teststring1);
        ?>

    I have used JavaScript for this task before as well (although I prefer using PHP and server-side languages):

        <script language="javascript">
        function WriteToFile(){
            var fso = new ActiveXObject("Scripting.FileSystemObject");
            var s = fso.CreateTextFile("C:\\NewFile.txt", true);
            var text = document.getElementById("TextArea1").innerText;
            s.WriteLine(text);
            s.WriteLine('***********************');
            s.Close();
        }
        </script>
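    Once PHP is installed and the first form posts to something like save.php, the server-side half can be very small. A minimal sketch, assuming PHP is available and that naming the file after the submitted title is acceptable (the script name and the basename() sanitisation are additions for illustration, not part of the question):

        <?php
        // save.php - hypothetical handler for the first form
        $title = basename($_POST['title']);   // drop any path components from the title
        $text  = $_POST['text'];
        file_put_contents($title . '.txt', $text);   // stores the textarea content as <title>.txt
        echo 'Saved ' . htmlspecialchars($title) . '.txt';
        ?>

    The "open" form can be handled the same way in reverse, with file_get_contents() or readfile() on the requested title.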


  • Script help: FIND command via atime, output to multiple files

    - by sswagner
    Here is a script I have written that I need help with. In the script I do a find for any file that has not been accessed for over 30, 60, 90, 180, 270 and 365 days. This works just fine; however, it takes a few days just to finish the 30-day portion. It is scanning a NAS (millions and millions of files).

    As you can see, the 30-day information really holds all the data needed for the rest of the script. The 60, 90, etc. portions are just redoing the same effort as the 30-day portion, only for a longer time frame. It would save weeks of re-scanning if the 60, 90, 180, etc. portions could somehow get their data from the 30-day output. This is where I am asking for help. The output is just like that of an ls -l command, and as you can see from the output below, there are multiple years in it. The script is printed below.

        total 24
        -rw-r--r-- 1 root  bin      60 Apr 12 13:07 config_file
        -rw-r--r-- 1 root  bin       9 Apr 12 13:07 config_file.InProgress
        -rw-r--r-- 1 root  bin       0 Apr 12 13:07 config_file.sids
        -rw-r--r-- 1 root  bin    1284 Apr 19 10:41 rpt_file
        -rw-r--r-- 1 16074 5003  20083 Apr 26  2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/dat1_01.gif
        -rw-r--r-- 1 16074 5003  20088 Apr 26  2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/set1_04.gif
        -rw-r--r-- 1 16074 5003   2008 Apr 26  2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/get2_03.htm
        -rw-r--r-- 1 16074 5003  20083 Apr 26  2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/per1_01.gif

    Any help is appreciated. These are Linux distro boxes, so I am sure Perl is on there too if needed. Thanks!

        #!/bin/ksh
        # search shares for files that have not been accessed for a certain time.
        # NOTE: $IN = input search  $OUT = output directory for text file
        # TESTS: Numeric arguments can be specified as
        #   +n for greater than n, -n for less than n, n for exactly n.
        #   -atime n   File was last accessed n*24 hours ago.
        #
        IN1=/nas/quota/slot_2/CR*
        IN2=/nas/quota/slot_3/CR*
        IN3=/nas/quota/slot_4/CR*
        IN4=/nas/quota/slot_5/CR*
        OUT=/nas/quota/slot_3/CR_PRJ144/steve
        mkdir ${OUT}
        for dir in ${IN1}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done
        for dir in ${IN2}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done
        for dir in ${IN3}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done
        for dir in ${IN4}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done
        for dir in ${IN1}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done
        for dir in ${IN2}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done
        for dir in ${IN3}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done
        for dir in ${IN4}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done
        for dir in ${IN1}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done
        for dir in ${IN2}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done
        for dir in ${IN3}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done
        for dir in ${IN4}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done
        for dir in ${IN1}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done
        for dir in ${IN2}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done
        for dir in ${IN3}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done
        for dir in ${IN4}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done
        for dir in ${IN1}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done
        for dir in ${IN2}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done
        for dir in ${IN3}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done
        for dir in ${IN4}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done
        for dir in ${IN1}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done
        for dir in ${IN2}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done
        for dir in ${IN3}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done
        for dir in ${IN4}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done
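    One way to avoid re-walking the NAS for every bucket is to capture each file's access time once and then filter that listing locally. A rough sketch, assuming GNU find is available on the Linux box (for -printf) and that a path-plus-epoch listing is enough to drive the older buckets; ownership and size could be captured with additional %-directives if needed:

        # single pass over the shares: access time in epoch seconds, then the path
        for dir in ${IN1} ${IN2} ${IN3} ${IN4}; do
            find $dir -atime +30 -printf '%A@\t%p\n'
        done > ${OUT}/30days_epoch.txt

        # derive the older buckets from that one listing instead of re-scanning
        now=$(date +%s)
        for days in 60 90 180 270 365; do
            awk -v now="$now" -v days="$days" -F'\t' \
                'now - $1 > days * 86400 { print $2 }' \
                ${OUT}/30days_epoch.txt > ${OUT}/${days}days.txt
        done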


  • Exclude a string from wildcard search in a shell

    - by steigers
    Hello everybody, I am trying to exclude a certain string from a file search. Suppose I have a list of files: file_Michael.txt, file_Thomas.txt, file_Anne.txt. I want to be able to write something like

        ls *<and not Thomas>.txt

    to give me file_Michael.txt and file_Anne.txt, but not file_Thomas.txt. The reverse is easy:

        ls *Thomas.txt

    Doing it with a single character is also easy:

        ls *[^s].txt

    But how do I do it with a whole string? Sebastian
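    If the shell happens to be bash (an assumption — the question does not say which shell), its extended globbing can negate a whole pattern rather than a single character:

        shopt -s extglob          # enable extended globs in bash
        ls !(*Thomas*).txt        # every .txt file whose name does not contain "Thomas"

    A more portable fallback is to filter the listing instead, e.g. ls *.txt | grep -v Thomas.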


  • Web bot in C++/PHP.

    - by John
    Hello, I've recently started learning PHP, but I have a wide knowledge of C++. I've been wondering how to make a web bot, and now I would greatly like to make one. I won't be using this robot for spamming or anything; it's just a test of what PHP/C++ can do online. I was wondering how I could go about doing this, and whether you have any articles/tutorials that would be helpful. Thanks, John
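    On the PHP side, the core of a bot is just fetching a page and parsing it for links. A minimal fetch step as a sketch (the URL and user-agent string are placeholders):

        <?php
        $ch = curl_init('http://www.example.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the page instead of printing it
        curl_setopt($ch, CURLOPT_USERAGENT, 'MyTestBot/0.1');
        $html = curl_exec($ch);
        curl_close($ch);
        // parse $html here (e.g. with DOMDocument) and queue any links found for the next fetch
        ?>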


  • Rate My Script: Finding Flash Files Embedded in Office Files

    - by Shaun Johnson
    Can anyone improve on this? Requires Sysinternals Strings.

        date /T >N:\output.txt
        net use z: /delete
        net use z: \\svr-002\rmstudentwork
        @cd /d "z:\"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xls | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.ppt | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.doc | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xlsx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.pptx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.docx | findstr \.swf >> "N:\output.txt"
        date /T >>N:\output.txt
        net use z: /delete /yes >>N:\output.txt
        net use z: \\svr-003\rmstudentwork
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xls | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.ppt | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.doc | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xlsx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.pptx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.docx | findstr \.swf >> "N:\output.txt"
        net use z: /delete /yes

    Basically it mounts a share as a network drive, then runs through the share looking for swf files inside office documents.
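    One possible tightening (a sketch only, not tested against this share layout) is to loop over the servers and extensions so the strings line appears once; the logic is otherwise the same:

        @echo off
        date /T > N:\output.txt
        for %%S in (svr-002 svr-003) do (
            net use z: /delete /yes
            net use z: \\%%S\rmstudentwork
            cd /d "z:\"
            rem scan every Office extension on this share for embedded .swf references
            for %%E in (xls ppt doc xlsx pptx docx) do (
                "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.%%E | findstr \.swf >> "N:\output.txt"
            )
            date /T >> N:\output.txt
        )
        net use z: /delete /yes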


  • Is there a generic class to write structured Text Files?

    - by Burnsys
    I have several projects that need to write structured text files, some with fixed-size fields, others delimited by characters. Is there a .NET class that could be used for that? I know there is a "Microsoft.VisualBasic.FileIO.TextFieldParser" that is useful for reading text files; I am actually searching for a "Microsoft.VisualBasic.FileIO.TextFieldWriter". Related: http://stackoverflow.com/questions/34182/reading-text-files-using-net
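    There is no TextFieldWriter counterpart in the framework, so a small helper class is the usual route. A rough sketch in C# of what one could look like (the class and member names here are made up for illustration):

        using System.IO;
        using System.Linq;

        public class TextFieldWriter
        {
            private readonly TextWriter _writer;
            public string Delimiter { get; set; }

            public TextFieldWriter(TextWriter writer)
            {
                _writer = writer;
                Delimiter = ",";
            }

            // character-delimited record
            public void WriteFields(params string[] fields)
            {
                _writer.WriteLine(string.Join(Delimiter, fields));
            }

            // fixed-width record: one width per field, space-padded on the right
            public void WriteFixedFields(int[] widths, params string[] fields)
            {
                _writer.WriteLine(string.Concat(fields.Select((f, i) => f.PadRight(widths[i]))));
            }
        }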


  • Git. Checkout feature branch between merge commits

    - by mageslayer
    Hi all. It's kind of weird, but I can't accomplish a pretty common operation with git. Basically what I want is to check out a feature branch, not using its head but using a SHA id. This SHA points between merges from the master branch. The problem is that all I get is just the master branch, without the commits from the feature branch. Currently I'm trying to fix a regression introduced earlier in the master branch.

    Just to be more descriptive, I crafted a small bash script to recreate the problem repository:

        #!/bin/bash
        rm -rf ./.git
        git init
        echo "test1" > test1.txt
        git add test1.txt
        git commit -m "test1" -a
        git checkout -b patches master
        echo "test2" > test2.txt
        git add test2.txt
        git commit -m "test2" -a
        git checkout master
        echo "test3" > test3.txt
        git add test3.txt
        git commit -m "test3" -a
        echo "test4" > test4.txt
        git add test4.txt
        git commit -m "test4" -a
        echo "test5" > test5.txt
        git add test5.txt
        git commit -m "test5" -a
        git checkout patches
        git merge master
        # Now how to get a branch having all commits from patches + test3.txt + test4.txt - test5.txt ???

    Basically all I want is to check out branch "patches" with files 1-4, but not including test5.txt. Doing:

        git checkout [sha_where_test4.txt_entered]

    just gives a branch with test1, test3, test4, but excluding test2.txt. Thanks.
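    The state "patches plus test3/test4 but without test5" never existed as a single commit, so it has to be built rather than checked out directly. A sketch of one way to do it (the SHA placeholders stand for the commit that added test2.txt, i.e. the pre-merge tip of patches, and the master commit that added test4.txt):

        # branch off the pre-merge tip of patches, then merge master only up to test4
        git checkout -b inspect <sha_of_test2_commit>
        git merge <sha_of_test4_commit>
        # the working tree now contains test1-4 but not test5

    The pre-merge tip of patches can also be addressed as patches^ (the first parent of the merge commit) if no SHA is handy.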


  • improve my python program to fetch the desired rows by using an if condition

    - by user2560507
    unique.txt contains 2 columns separated by a tab. total.txt contains 3 columns, each separated by a tab. I take each row from unique.txt and look for it in total.txt. If it is present, I extract the entire row from total.txt and save it in a new output file.

        ###Total.txt
        column a        column b                        column c
        interaction1    mitochondria_205000_225000      mitochondria_195000_215000
        interaction2    mitochondria_345000_365000      mitochondria_335000_355000
        interaction3    mitochondria_345000_365000      mitochondria_5000_25000
        interaction4    chloroplast_115000_128207       chloroplast_35000_55000
        interaction5    chloroplast_115000_128207       chloroplast_15000_35000
        interaction15   2_10515000_10535000             2_10505000_10525000

        ###Unique.txt
        column a                        column b
        mitochondria_205000_225000      mitochondria_195000_215000
        mitochondria_345000_365000      mitochondria_335000_355000
        mitochondria_345000_365000      mitochondria_5000_25000
        chloroplast_115000_128207       chloroplast_35000_55000
        chloroplast_115000_128207       chloroplast_15000_35000
        mitochondria_185000_205000      mitochondria_25000_45000
        2_16595000_16615000             2_16585000_16605000
        4_2785000_2805000               4_2775000_2795000
        4_11395000_11415000             4_11385000_11405000
        4_2875000_2895000               4_2865000_2885000
        4_13745000_13765000             4_13735000_13755000

    My program:

        file = open('total.txt')
        file2 = open('unique.txt')
        all_content = file.readlines()
        all_content2 = file2.readlines()
        store_id_lines = []
        ff = open('match.dat', 'w')
        for i in range(len(all_content)):
            line = all_content[i].split('\t')
            seq = line[1] + '\t' + line[2]
            for j in range(len(all_content2)):
                if all_content2[j] == seq:
                    ff.write(seq)
                    break

    Problem: instead of giving the desired output (the values of the 1st column that fulfil the if condition), it only writes the matched pair itself. I need something like: if the jth row of unique.txt equals the last two columns of the ith row of total.txt, then write the ith row of total.txt into the new file.
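    A set-based sketch of the "write the whole total.txt row" variant, assuming both files are tab-separated exactly as shown (the header lines, if present, simply never match a data row):

        # collect the (column a, column b) pairs from unique.txt once,
        # then keep every total.txt row whose 2nd and 3rd columns form one of those pairs
        with open('unique.txt') as f:
            wanted = {tuple(line.rstrip('\n').split('\t')) for line in f if line.strip()}

        with open('total.txt') as total, open('match.dat', 'w') as out:
            for row in total:
                cols = row.rstrip('\n').split('\t')
                if len(cols) >= 3 and (cols[1], cols[2]) in wanted:
                    out.write(row)   # the entire matching row, including the interaction id

    Using a set also removes the nested loop, which matters once the files grow.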


  • can I add textfields in the code other than in the init?

    - by themanepalli
    I'm a 15-year-old noob to Java. I am trying to make a basic program trader that asks for the stock price, the stock name, the stock market value and the type of order. Based on the type of order, I want a new textfield to appear. Do I have to add the textfield in init() first, or can I do it in actionPerformed? I googled some other examples but they are a little too complicated for me. Here's my code:

        import java.awt.*;
        import java.applet.*;
        // import an extra class for the ActionListener
        import java.awt.event.*;

        public class mathFair extends Applet implements ActionListener
        {
          TextField stockPrice2;
          TextField stockName2;
          TextField orderType2;
          TextField marketValue2;
          TextField buyOrder2;
          TextField sellOrder2;
          TextField limitOrder2;
          TextField stopLossOrder2;
          Label stockPrice1;
          Label stockName1;
          Label orderType1;
          Label marketValue1;
          Label buyOrder1;
          Label sellOrder1;
          Label limitOrder1;
          Label stopLossOrder1;
          Button calculate;

          public void init()
          {
            stockPrice1 = new Label("Enter Stock Price:");
            stockName1 = new Label("Enter Name of Stock: ");
            orderType1 = new Label("Enter Type of Order: 1 for Buy, 2 for Sell, 3 for Stop Loss, 4 for Limit");
            marketValue1 = new Label("Enter The Current Price Of The Market");
            stopLossOrder1 = new Label("Enter The Lowest Price The Stock Can Go");
            limitOrder1 = new Label("Enter The Highest Price The Stock Can Go");
            stockPrice2 = new TextField(35);
            stockName2 = new TextField(35);
            orderType2 = new TextField(35);
            marketValue2 = new TextField(35);
            calculate = new Button("Start The Simulation");
            add(stockPrice1);
            add(stockPrice2);
            add(stockName1);
            add(stockName2);
            add(marketValue1);
            add(marketValue2);
            add(orderType1);
            add(orderType2);
            add(calculate);
            calculate.addActionListener(this);
          }

          public void actionPerformed(ActionEvent e)
          {
            String stock = stockPrice2.getText();
            int stockPrice = Integer.parseInt(stock);
            stockPrice2.setText(stockPrice + "");
            String marketV = marketValue2.getText();
            int marketValue = Integer.parseInt(marketV);
            marketValue2.setText(marketValue + "");
            String orderT = orderType2.getText();
            int orderType = Integer.parseInt(orderT);
            orderType2.setText(orderType + "");
            if ((e.getSource() == calculate) && (orderType == 1))
            {
              buyOrder2 = new TextField(35);
              buyOrder1 = new Label("Enter Price You Would Like To Buy At");
              add(buyOrder2);
              add(buyOrder1);
            }
            else if ((e.getSource() == calculate) && (orderType == 2))
            {
              sellOrder2 = new TextField(35);
              sellOrder1 = new Label("Enter Price You Would Like To Sell At");
              add(sellOrder2);
              add(sellOrder1);
            }
            else if ((e.getSource() == calculate) && (orderType == 3))
            {
              stopLossOrder2 = new TextField(35);
              stopLossOrder1 = new Label("Enter The Lowest Price The Stock Can Go");
              add(stopLossOrder2);
              add(stopLossOrder1);
            }
            else if ((e.getSource() == calculate) && (orderType == 4))
            {
              limitOrder2 = new TextField(35);
              limitOrder1 = new Label("Enter the Highest Price The Stock Can Go");
              add(limitOrder2);
              add(limitOrder1);
            }
          }
        }
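    Components can be added from actionPerformed just as well as from init(); the detail that usually trips this up (an assumption about what is going wrong here, since no error is shown) is that the applet has to be told to re-run its layout after the add() calls, e.g.:

        // inside actionPerformed, after adding the new label/field
        buyOrder1 = new Label("Enter Price You Would Like To Buy At");
        buyOrder2 = new TextField(35);
        add(buyOrder1);
        add(buyOrder2);
        validate();   // lay the container out again so the new components become visible
        repaint();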


  • Can I tell sitecrawlers to visit a certain page?

    - by Ace
    Hi there! I have a Drupal website that revolves around a document database. By design you can only find these documents by searching the site, but I want all the results to be indexed by Googlebot and other crawlers. So I was thinking: what if I make a page that lists all the documents, and then tell the robots to visit that page to index all my documents? Is this possible, or is there a better way to do it?
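    This is possible, and the usual mechanism for pointing crawlers at such an index is an XML sitemap referenced from robots.txt. A minimal sketch (the URLs are placeholders, and Drupal also has modules that generate sitemaps automatically):

        # robots.txt
        Sitemap: http://www.example.com/sitemap.xml

        <!-- sitemap.xml -->
        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://www.example.com/all-documents</loc></url>
        </urlset>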


  • script to list users' mapped drives not giving results or errors

    - by user223631
    We are in the process of migrating two file servers to a new server. We have mapped drives via user group in Group Policy, but many users have manually mapped drives and we need to find these mappings. I have created a PowerShell script that remotely gets the drive mappings. It works on most computers, but there are many that are not returning results, and I am not getting any error messages. Each workstation on the list creates a text file, and the ones that are not returning results have no text in their files. I can ping these machines; if a machine is not turned on, it does come up with an error message that the RPC server is not available. My domain user account is in a group that is in the local admin group. I have no idea why some are not working. Here is the script:

        # Load list into variable, which will become an array of strings
        If( !(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
        If( !(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
        If( !(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstation.txt"; Return }
        If( !(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return }

        $computerlist = Get-Content C:\Scripts\Workstations.txt

        # Loop through each item in the array (each computer in the list of computers we loaded into the variable)
        ForEach ($computer in $computerlist)
        {
            $diskObject = Get-WmiObject Win32_MappedLogicalDisk -computerName $computer | Select Name,ProviderName | Out-File C:\Tester\Computers\$computer.txt -width 200
        }

        Select-String -Path C:\Tester\Computers\*.txt -Pattern cmsfiles | Out-File C:\Tester\Drivemaps-all.txt

        $strings = Get-Content C:\Tester\KnownMaps.txt
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -notmatch -simplematch | Out-File C:\Tester\Drivemaps-nonmatch.txt -Width 200
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -simplematch | Out-File C:\Tester\Drivemaps-match.txt -Width 200
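    Two things worth checking on the silent machines: whether the WMI call is failing quietly, and whether anyone was logged on at scan time (Win32_MappedLogicalDisk generally only reports drives mapped in a logged-on user's session). A sketch of the loop body with the error made visible — the _errors.txt name is an addition for illustration:

        ForEach ($computer in $computerlist)
        {
            Try
            {
                Get-WmiObject Win32_MappedLogicalDisk -ComputerName $computer -ErrorAction Stop |
                    Select-Object Name, ProviderName |
                    Out-File C:\Tester\Computers\$computer.txt -Width 200
            }
            Catch
            {
                # record why this workstation produced nothing instead of leaving an empty file
                "$computer : $($_.Exception.Message)" | Out-File C:\Tester\Computers\_errors.txt -Append
            }
        }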


  • SGE: invoking qmake raises "critical error: can't resolve group"

    - by Pierre
    I'm new to SGE and I'm trying to run qmake on our very new cluster with the following simple Makefile:

        merge.txt: job1.txt job2.txt job3.txt ...
                cat $^ > $@

        job1.txt:
                sleep 1
                echo "Hello From " $@ > $@
                sleep 1

        job2.txt:
                sleep 2
                echo "Hello From " $@ > $@
                sleep 2

        job3.txt:
                (...)

    The following command raises an error:

        qmake -l arch=lx24-amd64 -cwd -v PATH -- -j 4

        sleep 1
        dynamic mode
        sleep 2
        dynamic mode
        sleep 3
        dynamic mode
        sleep 4
        dynamic mode
        critical error: can't resolve group
        qmake: *** [job3.txt] Error 1
        critical error: can't resolve group
        qmake: *** [job2.txt] Error 1
        critical error: can't resolve group
        qmake: *** [job1.txt] Error 1
        critical error: can't resolve group

    What's wrong?


  • Unix list absolute file name

    - by Matthew Adams
    Given an arbitrary single argument representing a file (or directory, device, etc.), how do I get the absolute path of the argument? I've seen many answers to this question involving find/ls/stat/readlink and $PWD, but none that suits my need. It looks like the closest answer is ksh's "whence" command, but I need it to work in sh/bash.

    Assume a file, foo.txt, is located in my home directory, /Users/matthew/foo.txt. I need the following behavior, regardless of what my current working directory is (I'm calling the command "abs"):

        (PWD is ~)
        $ abs foo.txt
        /Users/matthew/foo.txt
        $ abs ~/foo.txt
        /Users/matthew/foo.txt
        $ abs ./foo.txt
        /Users/matthew/foo.txt
        $ abs /Users/matthew/foo.txt
        /Users/matthew/foo.txt

    What would "abs" really be? TIA, Matthew
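    One minimal sketch of abs as a shell function (assumes the argument's parent directory exists; the cd happens in a subshell so the caller's working directory is untouched):

        # usage: abs <path>
        abs() {
            ( cd "$(dirname "$1")" && printf '%s/%s\n' "$PWD" "$(basename "$1")" )
        }

    It handles all four cases above; arguments that are themselves directories or end in a slash would need a little extra care.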


  • Hard link does not work under MacOS in GUI mode

    - by AntonAL
    Hi, I ran into some slightly strange behavior while using hard links. From the terminal, I create a text file 1.txt and a hard link to this file:

        nano 1.txt
        mkdir dir
        ln 1.txt ./dir/

    I check the resulting hard link and see that its contents are the same as those of the original file:

        less ./dir/1.txt

    I change the initial file...

        nano 1.txt

    ...and see that the changes are reflected in the hard link:

        less ./dir/1.txt

    I change the content of the hard link (more correctly, of course, of the file being referenced by the hard link)...

        nano ./dir/1.txt

    ...and see that the changes are reflected in the initial file:

        less 1.txt

    Until now, all is going well. Now I close the terminal and start playing with the created files (1.txt and ./dir/1.txt) in Finder. When I change these two files with TextEdit, the changes are not reflected in the other file. Just as if the hard link had been torn off... What is going on here?
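    A quick way to see what TextEdit is doing is to compare inode numbers before and after a save. If the editor performs an atomic save (write a new file, then swap it into place), 1.txt ends up on a new inode while ./dir/1.txt keeps the old one, and the hard link is effectively broken:

        ls -i 1.txt ./dir/1.txt    # same inode number: still one file with two names
        # ...edit and save 1.txt in TextEdit...
        ls -i 1.txt ./dir/1.txt    # different inode numbers: the link has been severed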


  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. small.txt has 150K rows with a size of ~3 MB, a space-separated word and a number.

    Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt and the second column (the number) from small.txt where the first word of huge.txt and the first word of small.txt match.

    My attempts below failed miserably with the following error:

        cat huge.txt|join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt
        join: memory exhausted

    What I suspect is that the sorting order isn't right somehow, even though the files are pre-sorted using:

        sort -k1 huge.unsorted.txt > huge.txt
        sort -k1 small.unsorted.txt > small.txt

    The problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, bumping into the same error at the end. I see two ways out of this but don't know how to implement either of them.

    1) Any tips on how to sort the files in a way that the join command considers them to be sorted properly?

    2) I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and then "translating" the hashes back to strings, while leaving the numbers intact at the end of the lines. Since there would be only 150K hashes it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic?

    See file samples below. Thank you!

    Sample of huge.txt:

        had stirred me to 46
        had stirred my corruption 57
        had stirred old emotions 55
        had stirred something in 69
        had stirred something within 40

    Sample of small.txt:

        caley 114881
        calf 2757974
        calfed 137861
        calfee 71143
        calflora 154624
        calfskin 148347
        calgary 9416465
        calgon's 94846
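    On option 1: the usual culprit when sort and join disagree about "sorted" is the locale, and apostrophes and dashes are exactly where locale collation diverges from plain byte order. A sketch under that assumption — force the C locale for both tools and sort on the first field only (the 1.5 in the output list assumes huge.txt really has five columns):

        export LC_ALL=C
        sort -k1,1 huge.unsorted.txt  > huge.txt
        sort -k1,1 small.unsorted.txt > small.txt
        join -o 1.1,1.2,1.3,1.4,1.5,2.2 huge.txt small.txt > output.txt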


  • Where Not In OR Except simulation of SQL in LINQ to Objects

    - by Thinking
    Suppose I have two lists that hold the source file names and destination file names respectively. The source list has the files 1.txt, 2.txt, 3.txt, 4.txt, while the destination list has 1.txt, 2.txt. I need to write a LINQ query to find out which files in the source list are absent from the destination list; e.g. here the output will be 3.txt and 4.txt. I have done this with a foreach statement, but now I want to do the same using LINQ (C#).

    Edit: my code is

        List<FileList> sourceFileNames = new List<FileList>();
        sourceFileNames.Add(new FileList { fileNames = "1.txt" });
        sourceFileNames.Add(new FileList { fileNames = "2.txt" });
        sourceFileNames.Add(new FileList { fileNames = "3.txt" });
        sourceFileNames.Add(new FileList { fileNames = "4.txt" });

        List<FileList> destinationFileNames = new List<FileList>();
        destinationFileNames.Add(new FileList { fileNames = "1.txt" });
        destinationFileNames.Add(new FileList { fileNames = "2.txt" });

        IEnumerable<FileList> except = sourceFileNames.Except(destinationFileNames);

    And FileList is a simple class with only one property, fileNames, of type string.
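    Except as written compares FileList references, not names, so nothing is ever considered equal unless FileList overrides equality. A sketch that compares on the fileNames values directly (assuming that property is all that matters for the comparison):

        // files present in the source list but missing from the destination list
        var missing = sourceFileNames
            .Where(s => !destinationFileNames.Any(d => d.fileNames == s.fileNames))
            .ToList();

        // or, keeping Except, compare just the names:
        var missingNames = sourceFileNames.Select(s => s.fileNames)
                                          .Except(destinationFileNames.Select(d => d.fileNames));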


  • Windows Azure - Automatic Load Balancing - partitioning

    - by veda
    I was going through some videos and found that Windows Azure will group blobs into partitions based on the partition key and will automatically load balance these partitions across its servers. The partition key for a blob is the blob name; using the blob name, Azure will automatically create the partitions.

    Now, my question is: can I make Azure partition based on the container name instead? I want my partition key to be the container name. For example, I have a storage account with 2 containers named container1 and container2. In container1 I have 1000 files named 1.txt, 2.txt, 3.txt, ..., 501.txt, 502.txt, ..., 999.txt, 1000.txt, and in container2 I have another 1000 files named 1001.txt, 1002.txt, 1003.txt, ..., 1501.txt, 1502.txt, ..., 1999.txt, 2000.txt.

    Now, will Windows Azure generate 2000 partitions based on the blob names and serve me through several servers? Wouldn't it be better if Azure partitioned based on the container name — container1 on one server and container2 on another?


  • String Index Out Of Bound Exception error

    - by Fd Fehfhd
    I'm not really sure why I am getting this error. Here is my code; it is meant to test palindromes while disregarding punctuation.

        import java.util.Scanner;

        public class PalindromeTester
        {
          public static void main(String[] args)
          {
            Scanner kb = new Scanner(System.in);
            String txt = "";
            int left;
            int right;
            int cntr = 0;
            do
            {
              System.out.println("Enter a word, phrase, or sentence (blank line to stop):");
              txt = kb.nextLine();
              txt = txt.toLowerCase();
              char yP;
              String noP = "";
              for (int i = 0; i < txt.length(); i++)
              {
                yP = txt.charAt(i);
                if (Character.isLetterOrDigit(txt.charAt(yP)))
                {
                  noP += yP;
                }
              }
              txt = noP;
              left = 0;
              right = txt.length() - 1;
              while (txt.charAt(left) == txt.charAt(right) && right > left)
              {
                left++;
                right--;
              }
              if (left > right)
              {
                System.out.println("Palindrome");
                cntr++;
              }
              else
              {
                System.out.println("Not a palindrome");
              }
            } while (!txt.equals(""));
            System.out.println("You found " + cntr + " palindromes. Thank you for using palindromeTester.");
          }
        }

    If I test it and then press enter (so it will tell me how many palindromes I found), the error I am getting is:

        java.lang.StringIndexOutOfBoundsException: String index out of range: 0
            at PalindromeTester.main(PalindromeTester.java:38)

    and the line in question is:

        while (txt.charAt(left) == txt.charAt(right) && right > left)

    Thanks for the help in advance.
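    Two things stand out in the posted code (as a reading, not a tested fix): the terminating blank line reaches the comparison loop as an empty string, so txt.charAt(0) throws, and the punctuation filter indexes the string with the character itself (txt.charAt(yP)) rather than testing the character. A sketch of both changes:

        // filter: test the character directly instead of using it as an index
        if (Character.isLetterOrDigit(yP)) {
            noP += yP;
        }

        // after stripping punctuation, stop on the blank "quit" line before indexing into it
        txt = noP;
        if (txt.length() == 0) {
            break;
        }
        left = 0;
        right = txt.length() - 1;

    Checking if (left >= right) at the end would also let odd-length palindromes such as "aba" be counted.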


  • Best way to convert log files (*.txt) to web-friendly files (*.html, *.jsp, etc)?

    - by prometheus
    I have a bunch of log files which are pure text. Here is an example of one:

        Overall Failures Log
        SW Failures - 03.09.2010 - /logs/swfailures.txt - 23 errors - 24 warnings
        HW Failures - 03.09.2010 - /logs/hwfailures.txt - 42 errors - 25 warnings
        SW Failures - 03.10.2010 - /logs/swfailures.txt - 32 errors - 27 warnings
        HW Failures - 03.10.2010 - /logs/hwfailures.txt - 11 errors - 31 warnings

    These files can get quite large and contain a lot of other information. I'd like to produce an HTML file from this log that adds links to key portions and allows the user to open other log files as a result:

        SW Failures - 03.09.2010 - <a href="/logs/swfailures.txt">/logs/swfailures.txt</a> - 23 errors - 24 warnings

    This is greatly simplified, as I would like to add many more links and other HTML elements. My question is: what is the best way to do this? If the files are large, should I generate the HTML before serving it to the user, or will JSP do? Should I use Perl or other scripting languages to do this? What are your thoughts and experiences?
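    For large files, generating the HTML ahead of time (rather than per request in a JSP) keeps the serving side trivial. A sketch of the conversion in Python, as one example of the "any scripting language" route — the path pattern and file names are assumptions:

        import html
        import re

        # turn "/logs/xxx.txt" occurrences into links; everything else is just escaped
        path_re = re.compile(r'(/logs/\S+\.txt)')

        with open('failures.log') as src, open('failures.html', 'w') as dst:
            dst.write('<html><body><pre>\n')
            for line in src:
                escaped = html.escape(line.rstrip('\n'))
                linked = path_re.sub(r'<a href="\1">\1</a>', escaped)
                dst.write(linked + '\n')
            dst.write('</pre></body></html>\n')

    The same line-by-line substitution translates directly to Perl, or to a servlet if on-the-fly generation turns out to be needed.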


  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week, and then a weekly full backup is done at the weekend (Saturday).

    My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore is bringing back files from the weekend backup even if they no longer existed at that point in time. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt

    This is how the folder looked at the weekend when the full backup ran:

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    The actual restored folder looks like this:

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    Test3.txt should not be restored, because it did not exist at the point in time of the restore (even though it did exist at the weekend when the full backup ran). Is there a setting somewhere that we are missing? The folder in question is 200 GB; the example above is a simplification. I realise this is a basic question.


  • Replace html element with data in javascript

    - by Ultimate
    I am trying to auto-increment the serial number when a row is added, and automatically readjust the numbering when a row gets deleted, in JavaScript. For that I am using the following clone method. The rest of it works correctly, except that it does not increase the srno, because it clones it. Following is my code:

        function addCloneRow(obj)
        {
          if (obj)
          {
            var tBody = obj.parentNode.parentNode.parentNode;
            var trTable = tBody.getElementsByTagName("tr")[1];
            var trClone = trTable.cloneNode(true);
            if (trClone)
            {
              var txt = trClone.getElementsByTagName("input");
              var srno = trClone.getElementsByTagName("span");
              var dd = trClone.getElementsByTagName("select");
              text = tBody.getElementsByTagName("tr").length;
              alert(text); // here i am getting the srno in increasing order
              // I tried something like the following but it is not working
              // var ele = srno.replace(document.createElement("h1"), srno);
              // alert(ele);
              for (var i = 0; i < dd.length; i++)
              {
                dd[i].options[0].selected = true;
                var nm = dd[i].name;
                var nNm = nm.substring((nm.indexOf("_") + 1), nm.indexOf("["));
                dd[i].name = nNm + "[]";
              }
              for (var j = 0; j < txt.length; j++)
              {
                var nm = txt[j].name;
                var nNm = nm.substring((nm.indexOf("_") + 1), nm.indexOf("["));
                txt[j].name = nNm + "[]";
                if (txt[j].type == "hidden") {
                  txt[j].value = "0";
                } else if (txt[j].type == "text") {
                  txt[j].value = "";
                } else if (txt[j].type == "checkbox") {
                  txt[j].checked = false;
                }
              }
              for (var j = 0; j < txt.length; j++)
              {
                var nm = txt[j].name;
                var nNm = nm.substring((nm.indexOf("_") + 1), nm.indexOf("["));
                txt[j].name = nNm + "[]";
                if (txt[j].type == "hidden") {
                  txt[j].value = "0";
                } else if (txt[j].type == "text") {
                  txt[j].value = "";
                } else if (txt[j].type == "checkbox") {
                  txt[j].checked = false;
                }
              }
              tBody.insertBefore(trClone, tBody.childNodes[1]);
            }
          }
        }

    Following is my HTML:

        <table id="step_details" style="display:none;">
          <tr>
            <th width="5">#</th>
            <th width="45%">Step details</th>
            <th>Expected Results</th>
            <th width="25">Execution</th>
            <th><img src="gui/themes/default/images/ico_add.gif" onclick="addCloneRow(this);"/></th>
          </tr>
          <tr>
            <td><span>1</span></td>
            <td><textArea name="step_details[]"></textArea></td>
            <td><textArea name="expected_results[]"></textArea></td>
            <td><select onchange="content_modified = true" name="exec_type[]">
              <option selected="selected" value="1" label="Manual">Manual</option>
              <option value="2" label="Automated">Automated</option>
            </select></td>
            <td><img src="gui/themes/default/images/ico_del.gif" onclick="removeCloneRow(this);"/></td>
          </tr>
        </table>

    I want to change the srno in the span element dynamically after a row is added or removed. Need help, thanks.
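    Since the clone copies the old number, a simple approach is to renumber every row's span after each add or delete instead of trying to fix it inside the clone. A sketch in the same plain DOM style (assumes the first <tr> is the header row and each data row's first <span> holds the serial number):

        function renumberRows(tableId)
        {
          var rows = document.getElementById(tableId).getElementsByTagName("tr");
          for (var i = 1; i < rows.length; i++)   // start at 1: row 0 is the header
          {
            var span = rows[i].getElementsByTagName("span")[0];
            if (span) span.innerHTML = i;
          }
        }

        // call it at the end of addCloneRow and removeCloneRow:
        // renumberRows("step_details");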


  • How to deny the web access to some files?

    - by Strae
    I need to do a somewhat strange operation. First, I run on Debian, Apache 2 (which runs as user www-data).

    I have simple text files with a .txt or .ini (or whatever) extension — the extension doesn't matter. These files are located in subfolders with a structure like this:

        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt

    So the file name is always the same, and likewise the hierarchy; only the name of the middle folder changes:

        /folder-name-static/folder-name-dynamic/file-name-static.txt

    What I need to do is (I think) relatively simple: I must be able to read that file from programs on the server (Python, PHP for example), but if I try to retrieve the file contents from a browser (entering the URL www.example.com/folder1/car/foobar.txt, via cURL, etc.) I must get a forbidden error, or whatever, but not access to the file. It would also be nice if accessing those files via FTP were 'hidden', or at least they couldn't be downloaded (at least with the FTP root and user data I use).

    How can I do this? I found this online, to be put in the .htaccess file:

        <Files File.txt>
            Order allow,deny
            Deny from all
        </Files>

    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), and not in subfolders. Moreover, the folders at the second level (www.example.com/folder1/fruit/foobar.txt) will be created dynamically, and I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, which applies to all files with a given name located at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and just that one part changes?

    EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too — images and other things used by my users — so I am simply trying not to have a duplicated folder system, like:

        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]

    and then, for the 'secret' files:

        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt
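    Directives in an .htaccess file apply to the directory that contains it and to every subdirectory below it, so one sketch is a single rule placed in /folder1/.htaccess, matched on the static file name; the dynamically created subfolders never need their own copy (the Order/Deny form assumes Apache 2.2-style access control — on 2.4 it would be "Require all denied"):

        # /var/www/vhosts/example.com/httpdocs/folder1/.htaccess
        <Files "foobar.txt">
            Order allow,deny
            Deny from all
        </Files>

    Server-side code (PHP, Python) can still read the file, since the rule only affects requests Apache maps to URLs; FTP visibility is a separate matter handled by the FTP server's own permissions.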


  • How to add a DNS TXT record in cPanel and what to name it?

    - by Lars Holdgaard
    I have a domain where I have to add a DNS TXT record. More specifically, I have to do the following: "You should now create a DNS text record with the meta tag value shown below for the domain you're securing." The value I should insert is this one:

        globalsign-domain-verification=list_of_random_chars

    How do I add this in cPanel? I thought about doing it one way, but then I have to add a name; I also thought about adding it in another way. So my question really is: how do I add this TXT record correctly?
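    In zone-file terms, the record being asked for looks something like the line below — the name field is just the domain itself (in cPanel's editor that usually means entering "example.com." or leaving the name to default to the zone), the type is TXT, and the whole meta tag value goes in quotes. The value shown is the placeholder from the question, not a real token:

        example.com.    14400    IN    TXT    "globalsign-domain-verification=list_of_random_chars"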


  • When creating a new text file, should I add a .txt extension to its name?

    - by Agmenor
    When I create a new document intended to contain only plain text, Ubuntu does not oblige me to add a .txt extension to its name. It works very well indeed: gedit opens it without problems, understanding perfectly well that it is only text. The only two arguments in favour that I have found so far are (1) interoperability with Windows systems and (2) avoiding confusion with folders having the same name. Nevertheless, those two arguments do not convince me at all. As a consequence, should I keep the habit of adding an extension to files or not?

