Java is not my main programming language, so I might be asking the obvious.
But is there a simple file-handling library in Java, like in Python?
For example, I just want to say:
File f = Open('file.txt', 'w')
for (String line : f) {
    // do something with the line from the file
}
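For comparison, the closest standard-library idiom I've found so far is a BufferedReader loop, which feels verbose next to the Python version:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadLines {
    public static void main(String[] args) throws IOException {
        // read "file.txt" one line at a time
        BufferedReader reader = new BufferedReader(new FileReader("file.txt"));
        String line;
        while ((line = reader.readLine()) != null) {
            // do something with the line from the file
        }
        reader.close();
    }
}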
Thanks!
Currently I'm using the following as a bookmark in Firefox 3.6.3. It redirects me to the RFC just fine, but the active tab says [object Window]. What do I need to do to get rid of that artifact?
javascript:var rfc=prompt("RFC Number");window.open("http://ietf.org/rfc/rfc" + rfc + ".txt")
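From what I understand, the tab ends up showing the bookmarklet's return value (window.open returns the new Window object), so wrapping the call in void() to make the whole expression undefined is the fix I plan to try:

javascript:var rfc=prompt("RFC Number");void(window.open("http://ietf.org/rfc/rfc" + rfc + ".txt"))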
Not sure if it's possible, but how do I read a resource from a URL using JavaScript without AJAX?
For example, the following URL is a static text file containing JSON-encoded text:
http://mysite.s3.amazonaws.com/jsonencodedcontent.txt
I'd like to use JavaScript to read the content from the above link and load the JSON content into a JavaScript variable.
I can't use AJAX because of the cross-site restriction, and I have no control over the Amazon S3 domain.
Any way to achieve this?
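The only cross-domain technique I know of is a script tag (JSONP-style), but as far as I can tell that only works if the content is wrapped in a callback, which would mean re-uploading the file; a rough sketch, where handleContent is a made-up callback name:

<script type="text/javascript">
function handleContent(data) {
    // data would be the decoded JSON value
    var content = data;
}
</script>
<!-- this only works if the file literally contains: handleContent({...}); -->
<script type="text/javascript"
        src="http://mysite.s3.amazonaws.com/jsonencodedcontent.txt"></script>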
I have a folder full of files and I want to search for a string inside them. The issue is that some files may be zip, exe, ogg, etc.
Can I somehow check what kind of file each one is, so that I only open and search through txt, php, etc. files?
I can't rely on the file extension.
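Outside of any particular language, I know the Unix file(1) utility does this kind of content sniffing, so one option might be to shell out to it where it's available ("somefile" is a placeholder):

file -b --mime-type somefile
# prints e.g. text/plain or application/zip; -b suppresses the file name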
What is the best approach to creating a simple, thread-safe logging class? Is something like this sufficient?
public class Logging
{
    public Logging()
    {
    }

    public void WriteToLog(string message)
    {
        object locker = new object();
        lock (locker)
        {
            StreamWriter SW;
            SW = File.AppendText("Data\\Log.txt");
            SW.WriteLine(message);
            SW.Close();
        }
    }
}
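For comparison, I've also seen the pattern below, where the lock object is a static field so every call contends on the same lock; I suspect my version above, which creates a new locker on each call, doesn't actually serialize anything. A sketch:

public class Logging
{
    // one shared lock object for all callers
    private static readonly object locker = new object();

    public void WriteToLog(string message)
    {
        lock (locker)
        {
            // using guarantees the writer is closed even if WriteLine throws
            using (StreamWriter sw = File.AppendText("Data\\Log.txt"))
            {
                sw.WriteLine(message);
            }
        }
    }
}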
The answer in this post http://stackoverflow.com/questions/2119680/use-jquery-to-check-if-a-url-on-another-domain-is-404-or-not shows how to use YQL in jQuery to check whether a URL is valid or not. However, I can't get this to work for me. The only difference I can think of is that my URL is a text file (http://mycrossdomain.com/sometext.txt) and not HTML. I think the YQL query needs to be adjusted accordingly. Any help is appreciated.
Hi,
I am reading a file using File.ReadAllText("filename.txt"). The file is located in the very same folder as the .exe (c:/program files/installation folder), but the Windows app looks in the System32 folder for it by default.
I don't want to hard-code the path of the file.
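What I'm considering is resolving the file name against the executable's folder instead of the current working directory, along these lines:

// AppDomain.CurrentDomain.BaseDirectory is the folder containing the .exe,
// regardless of what the working directory happens to be
string path = System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "filename.txt");
string contents = System.IO.File.ReadAllText(path);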
I have some HTML code generated in JavaScript, like this:
cell.innerHTML = '<a href="#" class="sortheader" id="sortheader_'+i+'" '+
                 'onclick="ts_resortTable(this, '+i+');return false;">' +
                 txt+'<span class="sortarrow"></span></a>';
I'd like to call the function ts_resortTable() independently of the onclick event, but how can I produce the "this" argument for the function?
I tried the DOM selector $('sortheader_'+i) in jQuery and getElementById('#sortheader_'+i) as well, but neither works.
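One detail I may have mixed up: as far as I know, getElementById takes the bare id while a jQuery selector needs the # prefix, so perhaps the lookup should be:

// bare id for getElementById; the '#' belongs only in the jQuery selector
var link = document.getElementById('sortheader_' + i);
// or equivalently: var link = $('#sortheader_' + i)[0];
ts_resortTable(link, i);   // pass the anchor element where the onclick passed "this"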
I would like to have direct access to the text inside a textbox on another form, so I added a public variable _txt to the form and added an event handler like so:
private void richTextBox1_TextChanged(object sender, EventArgs e)
{
    _txt = richTextBox1.Text;
}
But the form is loaded like this:
public FrmTextChild(string text)
{
    InitializeComponent();
    _txt = text;
    richTextBox1.Text = _txt;
    Text = "Untitled.txt";
}
Is there a better way to directly link the two?
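One alternative I'm weighing is dropping the field entirely and exposing a property on the child form that reads and writes the textbox directly, so there are never two copies of the text to keep in sync (DocumentText is just a made-up name):

// on FrmTextChild: always reflects the textbox, nothing to synchronize
public string DocumentText
{
    get { return richTextBox1.Text; }
    set { richTextBox1.Text = value; }
}

The parent form could then read child.DocumentText whenever it needs the current text.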
Hi,
Is there a way to get the length of the collection from this?
ParallelQuery<string> Lines = File.ReadAllLines("Topics.txt").AsParallel<string>();
This has no Length property. There is a Count method, but it takes a Func. If I don't pass a Func parameter, it would count everything in the collection, which is what I want, but how can I avoid passing one in?
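For what it's worth, the documentation seems to list a parameterless Count() overload as well, so maybe it's as simple as:

// counts every element; no predicate required
int total = Lines.Count();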
Thanks
Yes, it's Windows, sorry.
I'm using mysqldump with the option -T, which creates a .sql and a .txt file per table.
mysqldump -u user -ppass db -T path
I use that option to be able to restore a single table easily.
Now I'd like to restore all the tables:
mysql -u user -ppass db < path/*.sql
Obviously, that doesn't work.
Also, I don't know where my funcs/procs go.
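The workaround I have in mind is just looping over the dump files from cmd (untested; inside a .bat file the variable would be %%f instead of %f):

for %f in (path\*.sql) do mysql -u user -ppass db < %f

As for the funcs/procs, my guess is that -T only produces per-table files, so routines would need a separate pass with mysqldump --routines, but I haven't verified that.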
Thanks.
So I have a Linux program that runs in a while(true) loop, waiting for user input, processing it, and printing the result to stdout.
I want to write a shell script that starts this program, feeds it lines from a .txt file one at a time, and saves the program's output for each line to a file.
So I want to know if there are commands for the following (rough sketch after the list):
- open a program
- send text to a process
- receive output from that program
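Here's the kind of thing I mean, as a shell sketch (./myprog, input.txt, and out.txt are placeholder names, and I'm assuming the program reads stdin until EOF):

# simplest case: let the long-running program consume the whole file itself
./myprog < input.txt > out.txt

# alternatively, one run per line, appending each result:
while IFS= read -r line; do
    printf '%s\n' "$line" | ./myprog >> out.txt
done < input.txt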
Many thanks.
When I type in the following code, I get the output as 1073741823.
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    cout << v.max_size();
    return 0;
}
However, when I try to resize the vector to 1,000,000,000 via v.resize(1000000000);, the program stops executing. How can I enable the program to allocate the required memory, when it seems that it should be able to?
I am using MinGW on Windows 7. I have 2 GB of RAM. Should it not be possible?
In case it is not possible, can't I declare it as an array of integers and get away with it? But even that doesn't work.
Another thing: suppose I were to use a file instead (which can easily handle so much data).
How can I read and write to it at the same time?
Using fstream file("file.txt", ios::out | ios::in); doesn't create a file in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously.
What I mean is this:
Let the contents of the file be 111111.
Then if I run:
#include <fstream>
#include <iostream>
using namespace std;

int main()
{
    fstream file("file.txt", ios::in | ios::out);
    char x;
    while (file >> x)
    {
        file << '0';
    }
    return 0;
}
Shouldn't the file's contents now be 101010? Read one character, then overwrite the next one with 0? Or, in case the entire contents were read at once into some buffer, should there not be at least one 0 in the file, i.e. 1111110?
But the contents remain unaltered. Please explain.
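From what I've read since writing this, a stream opened for both reading and writing needs a seek when switching between the two; this is the variant I plan to try (untested):

#include <fstream>
using namespace std;

int main()
{
    fstream file("file.txt", ios::in | ios::out);
    char x;
    while (file >> x)
    {
        file.seekp(file.tellg());  // reposition before switching from reading to writing
        file << '0';
        file.seekg(file.tellp());  // and again before the next read
    }
    return 0;
}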
Thank you.
I'm running the following code to check for updates to my software, and I wonder whether VB.Net will automatically use the computer's proxy settings:
Dim CurrentVersion As String = (New System.Net.WebClient).DownloadString("URL/version.txt")
If not, how can I adapt it to use proxy settings?
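In case it doesn't happen automatically, this is the adaptation I had in mind, setting the system proxy explicitly (untested):

Dim client As New System.Net.WebClient()
' pick up the machine's configured proxy and pass along the current user's credentials
client.Proxy = System.Net.WebRequest.GetSystemWebProxy()
client.Proxy.Credentials = System.Net.CredentialCache.DefaultCredentials
Dim CurrentVersion As String = client.DownloadString("URL/version.txt")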
I want to use dup2 to read from an input file and redirect it to the input of an exec'ed program. But my problem is that I have three running processes, and all of them have to open the same input file while doing different jobs. What would you suggest in such a case? I don't know if it is possible to use "cat data.txt" to feed the input to the three other processes, and I don't know how to do that either.
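For a single child, this is the dup2 pattern I have so far; my thinking is that each of the three processes could do its own open(), so each would get an independent read offset ("./worker" is a placeholder for the real program):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    dup2(fd, STDIN_FILENO);   /* stdin now reads from data.txt */
    close(fd);                /* the dup keeps the file open */

    execl("./worker", "worker", (char *)NULL);
    perror("execl");          /* reached only if exec fails */
    return 1;
}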
I have a problem logging in to a page and then using it with cURL.
I log in, get the PHPSESSID and cookie, and then try to perform an action, but the page returns 'not logged in'.
But if I manually log in and copy/paste that PHPSESSID into curl's cookies .txt file, everything works fine. So why doesn't it work with the PHPSESSID obtained through cURL?
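For reference, this is the shape of the cookie handling I believe is needed, writing and then re-reading the same cookie jar (URLs and form fields are placeholders):

# log in: -c writes the received cookies (including PHPSESSID) to the jar
curl -c cookies.txt -d "user=u&pass=p" http://example.com/login.php
# follow-up action: -b sends those same cookies back
curl -b cookies.txt http://example.com/action.php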
How can I check (with SELinux) access to a file by process name?
For example: we have two processes:
* /usr/bin/foo1
* /usr/bin/foo2
They run under an account with username userA and try to open, for modification, the file:
/home/userA/test.txt
I want foo1's attempt to open the file to succeed, but if foo2 tries to open this file, I want a message about it in /var/log.
The problem is that both processes have the same user ID, and I can't use RBAC by username.
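My rough understanding is that each binary would need its own SELinux domain, after which the type-enforcement rules can differ per domain; a fragment of what I imagine (the type names are made up, and a real policy module needs far more boilerplate):

# foo1's domain is allowed to write the file; foo2's domain gets no such rule,
# so its attempt is denied and logged as an AVC message in the audit log
allow foo1_t user_home_t:file { open read write };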
Is it a good practice to set stream references to null after closing them? Would this release resources in any way?
Example:
BufferedReader input = new BufferedReader(new FileReader("myfile.txt"));
// code
input.close();
input = null;
// possibly more code
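For context, the pattern I've usually seen recommended is closing in a finally block and simply letting the reference go out of scope, with no nulling:

BufferedReader input = new BufferedReader(new FileReader("myfile.txt"));
try {
    // code
} finally {
    input.close();  // releases the OS file handle; the object itself is collected later
}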
Not real information:
$ ssh-keygen -t rsa -C "[email protected]"
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/Tekkub/.ssh/id_rsa):
ssh.txt
I entered a file name here; not sure if I should have.
Enter passphrase (empty for no passphrase):
I am stuck here. I type and it doesn't work.
Let's say I need to create a new file whose path is ".\a\bb\file.txt". The folders a and bb may not exist. How can I create this file in C# so that folders a and bb are created automatically if they don't exist?
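The approach I've pieced together so far relies on Directory.CreateDirectory, which as I understand it creates all intermediate folders and does nothing if they already exist:

string path = @".\a\bb\file.txt";
// creates .\a and .\a\bb as needed; harmless if they already exist
System.IO.Directory.CreateDirectory(System.IO.Path.GetDirectoryName(path));
// then create the file itself
System.IO.File.WriteAllText(path, String.Empty);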
I'm currently trying to perform a deep crawl within a small list of sites. To accomplish this, I updated conf/domain-urlfilter.txt with the domains of the sites I wish to scrape, which worked nicely. However, I found that not only were the links crawled at every step filtered, but the outlinks captured from each page crawled were filtered as well.
Is there a way to avoid filtering captured outlinks while still filtering crawled URLs?