Search Results

Search found 40435 results on 1618 pages for 'organize files and folders'.

Page 39/1618 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • How are file permissions stored in an inode?

    - by Debadyuti Maiti
    Suppose there are two PCs, "A" and "B". If A downloads a file from B, what would the file permissions of the downloaded file be? Is it possible that the downloaded file on A will have an inode entry with all of its permissions from B, and with B's user account stored as the owner? If that's the case, is it then impossible to change that file's permissions on A if "others" [as in user-group-others] doesn't have the right to write to the file? E.g., if this is the case, __x __x __x file.txt [On B] then what would the file permissions be on A for that same file downloaded from B [e.g. through vsftpd]? __x __x __x file.txt [On A] or rw_x rw_x rw_x file.txt [On A] [i.e. defined by A's default umask value]?
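    Generally, a downloaded copy is a brand-new inode created on A by the FTP client, owned by the local user and given permissions derived from the creating process's umask; nothing about B's owner or mode travels with the file contents. A minimal Python sketch you could run on A to observe that (the filename is just a placeholder):

        import os, stat

        old_umask = os.umask(0o022)                   # a typical default; restored below
        with open("downloaded_copy.txt", "w") as f:   # placeholder standing in for the fetched file
            f.write("contents fetched from B\n")
        os.umask(old_umask)

        st = os.stat("downloaded_copy.txt")
        print("owner uid:", st.st_uid)                # the local user on A, not B's account
        print("mode:", stat.filemode(st.st_mode))     # e.g. -rw-r--r-- under umask 022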

    Read the article

  • Where should I store and verify files manipulated by an app

    - by Alan W. Smith
    I'm working on a little Ruby script to move screenshots while renaming them based on a specific convention. I'll be writing tests to confirm the behavior. Ruby has lots of conventions for where to store files (e.g. the "spec" and "features" directories for RSpec and Cucumber, respectively), but I'm not finding best practices for storing files that will be acted upon by the tests. The same goes for a destination for the final copies of the files. So, the question, in two parts: Where should I store the files that the test cases will use as source input? And where should tests that need to write output files send them?

    Read the article

  • How to add a daemon to a Quickly project

    - by darkrex1986
    Currently I'm developing an application with Quickly which is divided into two parts: a graphical UI where the user can configure some things, and a daemon which does most of the work in the background. I started with the UI, creating some windows for settings and so on. Now I want to start coding the daemon, but I have no clue how to implement the daemon in my Quickly project. Can I simply paste the files of the daemon into the project folder, or is there an implementation method for adding new files to a Quickly project? Or do I have to create a new project and merge them together?

    Read the article

  • Is it possible to have duplicated folders in a new /home partition, as an error?

    - by ranna
    I moved my home directory to its own partition, but I ended up with what seem to be duplicated home folders (there are two accounts: admin and everyone's account). I have the "admin" and "everyone's" folders, and I also have a hidden ".ecryptfs" folder which again contains "admin" and "everyone's" folders; inside each of those there are .ecryptfs and .Private folders whose contents can't be read, as they seem to be encrypted. Which of the two sets am I able to delete: the folders inside .ecryptfs, or the unhidden "admin" and "everyone's" folders? They seem to be duplicated, as they have the same file size.

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files ( large here being potentially upwards of a gigabyte ) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationship - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
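    The fragment-at-a-time idea above is essentially what forward-only streaming readers are for (in .NET, XmlReader fills that role). Purely as an illustration of the pattern, here is a sketch with Python's ElementTree iterparse: read forward, run cheap local queries against one subtree at a time, then discard it. The <record>/<status>/<id> names are hypothetical stand-ins for the real repeating element.

        import xml.etree.ElementTree as ET

        def matching_ids(path):
            # Stream the document; only one <record> subtree is held in memory at a time.
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == "record":
                    # A cheap, local query against just this subtree.
                    if elem.findtext("status") == "active":
                        yield elem.findtext("id")
                    elem.clear()   # drop the subtree before reading further

        for record_id in matching_ids("huge.xml"):
            print(record_id)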

    Read the article

  • Copy folders when copying list items from source to destination

    - by iHeartDucks
    Hi, this is my code to copy files in a list from source to destination. Using the code below I am only able to copy files, but not folders. Any ideas on how I can copy the folders and the files within those folders?

        using (SPSite objSite = new SPSite(URL))
        {
            using (SPWeb objWeb = objSite.OpenWeb())
            {
                SPList objSourceList = null;
                SPList objDestinationList = null;
                try
                {
                    objSourceList = objWeb.Lists["Source"];
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error opening source list");
                    Console.WriteLine(ex.Message);
                }
                try
                {
                    objDestinationList = objWeb.Lists["Destination"];
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error opening destination list");
                    Console.WriteLine(ex.Message);
                }
                string ItemURL = string.Empty;
                if (objSourceList != null && objDestinationList != null)
                {
                    foreach (SPListItem objSourceItem in objSourceList.Items)
                    {
                        ItemURL = string.Format(@"{0}/Destination/{1}", objDestinationList.ParentWeb.Url, objSourceItem.Name);
                        objSourceItem.CopyTo(ItemURL);
                        objSourceItem.UnlinkFromCopySource();
                    }
                }
            }
        }

    Thanks

    Read the article

  • Efficient way to create a large number of SharePoint folders

    - by BeraCim
    Hi all: I'm currently creating a large number of SharePoint folders within a list (e.g. ~800 folders), with each folder containing a different number of items. The way it is currently done is that it programmatically reads off the content types, items, event listeners and the like from the same folder in another web, then creates the same folder in the current web. That ran reasonably fine and fast in a dev environment. However, when it goes to an environment with WFEs and farms, it slows down a lot. I have checked that there are no leaks in the code, and that the code follows SharePoint coding best practices. At the moment I'm looking at it at the code level. From your experience, are there any efficient ways of creating a large number of SharePoint folders, lists and items? EDIT: I'm currently using the SharePoint API, but will be looking at moving to the Web Service in the future. I'm interested in looking at both options though. Code-wise, it's just the general reading of a folder and its content types plus items and their details, then creating the same folder in the same list with the same content types, then copying over the items using a patch update. I want to know whether there are more efficient ways of doing the above. Thanks.

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an output-stream value, but I'm sure there's a better way to do it. Thanks!
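    For what it's worth, the hashmap-of-streams idea from the question can be sketched very directly (shown here in Python rather than Java, purely as an illustration). The 2-byte big-endian length prefix is an assumption for the sake of the example, since the question doesn't spell out how each message's length is determined; ~6,000 simultaneously open appendable files is usually fine, but check the OS open-file limit first.

        import struct

        KEY_SIZE = 6   # the fixed-size type field from the question
        LEN_SIZE = 2   # assumed: a 2-byte big-endian payload length follows the key

        def split_by_type(path):
            handles = {}   # message type -> open output file
            try:
                with open(path, "rb") as src:
                    while True:
                        key = src.read(KEY_SIZE)
                        if len(key) < KEY_SIZE:
                            break   # end of input
                        (length,) = struct.unpack(">H", src.read(LEN_SIZE))
                        payload = src.read(length)
                        out = handles.get(key)
                        if out is None:
                            out = handles[key] = open(key.decode("ascii") + ".dat", "ab")
                        out.write(payload)
            finally:
                for out in handles.values():
                    out.close()

        split_by_type("messages.bin")   # placeholder input filename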

    Read the article

  • Ignore folders with certain filetypes

    - by gavin19
    I'm trying in vain to rewrite my old PowerShell script found here - "$_.extension -eq" not working as intended? - for Python. I have no Python experience or knowledge and my 'script' is a mess, but it mostly works. The only thing missing is that I would like to be able to ignore folders which don't contain 'mp3s', or whichever filetype I specify. Here is what I have so far -

        import os, os.path, fnmatch

        path = raw_input("Path : ")
        for filename in os.listdir(path):
            if os.path.isdir(filename):
                os.chdir(filename)
                j = os.path.abspath(os.getcwd())
                mp3s = fnmatch.filter(os.listdir(j), '*.txt')
                if mp3s:
                    target = open("pls.m3u", 'w')
                    for filename in mp3s:
                        target.write(filename)
                        target.write("\n")
                os.chdir(path)

    All I would like to be able to do (if possible) is that when the script is looping through the folders it ignores those which do NOT contain 'mp3s', and removes the 'pls.m3u'. I could only get the script to work properly if I created the 'pls.m3u' by default. The problem is that that creates a lot of empty 'pls.m3u' files in folders which contain only '.jpg' files, for example. You get the idea. I'm sure this script is blasphemous to Python users, but any help would be greatly appreciated.
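    One possible shape for the behavior described above (a sketch, keeping the Python 2 raw_input from the original and filtering on '.mp3' - swap in whatever extension is actually wanted): use os.walk so every subfolder is visited without chdir juggling, write pls.m3u only where matches exist, and delete any leftover playlist elsewhere.

        import os

        root = raw_input("Path : ")
        for dirpath, dirnames, filenames in os.walk(root):
            mp3s = sorted(f for f in filenames if f.lower().endswith(".mp3"))
            playlist = os.path.join(dirpath, "pls.m3u")
            if mp3s:
                with open(playlist, "w") as target:   # only created where mp3s actually exist
                    target.write("\n".join(mp3s) + "\n")
            elif os.path.exists(playlist):
                os.remove(playlist)   # folder has no mp3s, so drop the stale playlist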

    Read the article

  • Move selected e-mails to Personal inbox Folders Files

    - by kasunmit
    Hi, I am creating sub-folders inside the Inbox folder using the following code.

        private void btnCreate_Click(object sender, EventArgs e)
        {
            if (string.IsNullOrEmpty(txtInboxFolderName.Text.Trim().ToString()))
            {
                MessageBox.Show("Enter name of the folder", "Null value", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
                txtInboxFolderName.Focus();
                return;
            }
            else
            {
                CreateNewInboxFolder();
            }
        }

        private void CreateNewInboxFolder()
        {
            try
            {
                string folderName = txtInboxFolderName.Text.ToString();
                OutLook._Application olApp = new OutLook.ApplicationClass();
                OutLook._NameSpace olNs = olApp.GetNamespace("MAPI");
                OutLook.MAPIFolder oInbox = olNs.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderInbox);
                OutLook.Folders oFolders = oInbox.Folders;
                OutLook.MAPIFolder oInboxFolders = oFolders.Add(folderName, OutLook.OlDefaultFolders.olFolderInbox);
                MessageBox.Show("Folder " + oInboxFolders.Name + " is Created", "Create inbox folder", MessageBoxButtons.OK, MessageBoxIcon.Information);
            }
            catch (Exception)
            {
                MessageBox.Show("Folder name: " + txtInboxFolderName.Text + " is already created" + "\nPlease enter different folder name", "Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
                txtInboxFolderName.Focus();
            }
        }

    With the above code I can create a sub-folder inside the Inbox folder. What I need to do next is save e-mails into that created folder. Please help me to do it. I think we could import those folder names into a separate form and select one from there; then I can save e-mails (the e-mail details are in the data grid and I need to save them to the new folder). Please help me with the code. I am very confused and I'm not sure how to do it.
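    The piece that does the moving is MailItem.Move(targetFolder) in the same Outlook object model the code above already uses. As a rough illustration of the calls involved (sketched in Python via pywin32 rather than C#, and with a hypothetical folder name and selection rule):

        import win32com.client   # assumes the pywin32 package is installed

        outlook = win32com.client.Dispatch("Outlook.Application")
        ns = outlook.GetNamespace("MAPI")
        inbox = ns.GetDefaultFolder(6)                 # 6 == olFolderInbox
        target = inbox.Folders.Item("MyNewFolder")     # the sub-folder created earlier (hypothetical name)

        # Move matching messages; snapshot the collection first because Move() mutates it.
        for item in list(inbox.Items):
            if "invoice" in (item.Subject or "").lower():   # hypothetical selection rule
                item.Move(target)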

    Read the article

  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub to take in two filenames and return 1 or return 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forte, it should be quite easy to compare two files and determine whether they are identical or not (code below untested).

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA;
            open my $file2, '<', $fileB;

            while (my $lineA = <$file1>) {
                next if $lineA eq <$file2>;
                return 0 and last;
            }
            return 1;
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line-by-line until a difference is found. If no difference is found, the files must be identical. But this approach is limited and clumsy. What if the total number of lines differs between the two files? Should I open and close to determine line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.
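    Worth noting that File::Compare ships with the core Perl distribution (including 5.6.1), so its compare() function fits the "no CPAN" constraint. The underlying idea - read both files in fixed-size binary chunks and stop at the first difference, which also sidesteps the differing-line-count worry - is language-agnostic; a small sketch of it in Python:

        def files_match(path_a, path_b, chunk_size=64 * 1024):
            # Compare raw bytes chunk by chunk; unequal lengths show up as a mismatch.
            with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
                while True:
                    a = fa.read(chunk_size)
                    b = fb.read(chunk_size)
                    if a != b:
                        return False
                    if not a:        # both files ended at the same point
                        return True

        print(files_match("before.txt", "after.txt"))   # placeholder filenames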

    Read the article

  • How to organize Python modules for PyPI to support 2.x and 3.x

    - by Craig McQueen
    I have a Python module that I would like to upload to PyPI. So far, it is working for Python 2.x. It shouldn't be too hard to write a version for 3.x now. But, after following guidelines for making modules in these places:

        Distributing Python Modules
        The Hitchhiker’s Guide to Packaging

    it's not clear to me how to support multiple source distributions for different versions of Python, and it's not clear if/how PyPI could support it. I envisage I would have separate code for:

        2.x
        2.6 (maybe, as a special case to use the new buffer API)
        3.x

    How is it possible to set up a Python module in PyPI so that someone can do:

        easy_install modulename

    and it will install the right thing whether the user is using 2.x or 3.x?
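    One approach from that era (hedged: this reflects the distribute/setuptools tooling current at the time, not today's packaging) was to publish a single 2.x source distribution and set use_2to3, so installation under Python 3 ran 2to3 automatically and one upload served both easy_install audiences. A minimal setup.py sketch with a placeholder project name:

        from setuptools import setup

        setup(
            name="modulename",          # placeholder name from the question
            version="1.0",
            py_modules=["modulename"],
            use_2to3=True,              # no effect under 2.x; converts the code when installed under 3.x
        )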

    Read the article

  • Security permissions for remote shared folders

    - by Tomas Lycken
    I have two servers running Windows Server 2003, and I want to copy files from one server (A), programmatically with a Windows service running under the Local System account, to a shared folder on the other (B). I keep getting "access denied" errors, and I can't figure out what security settings I need to set to open the shared folder for writing. This is what I've done on the receiving end: On A, right-click on the folder to share, choose the tab "Sharing" and select "Share this folder". Set a share name. Click "Permissions", add the group "Everyone" and give it full control. I tried choosing the "Security" tab to give some permissions there as well, but the "Add" dialog only finds local users, despite the fact that B shows up in the "Workgroup computers" dialog. After further inspection, this is the case also for the "Permissions" dialog under the "Sharing" tab (are they the same?).

    Read the article

  • How do I correctly organize output into columns?

    - by wrongusername
    The first thing that comes to my mind is to do a bunch of \t's, but that would cause words to be misaligned if any word is longer than another by a few characters. For example, I would like to have something like:

        Name    Last Name             Middle initial
        Bob     Jones                 M
        Joe     ReallyLongLastName    T

    Instead, by including only "\t"s in my cout statement, the columns end up shifted: a long entry like "ReallyLongLastName" pushes everything after it out of line with the rows above. What else would I need to do?
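    The usual fix is fixed-width fields rather than tab stops; in C++, std::setw and std::left do that job. The same idea, illustrated with Python's format specifiers and the sample rows above:

        rows = [
            ("Name", "Last Name", "Middle initial"),
            ("Bob", "Jones", "M"),
            ("Joe", "ReallyLongLastName", "T"),
        ]
        for first, last, middle in rows:
            # Pad every column to a fixed width so long names can't shift the next column.
            print("{0:<8}{1:<22}{2:<16}".format(first, last, middle))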

    Read the article

  • How can I copy files with names containing spaces and UNICODE, when using a shell script?

    - by LOlliffe
    I have a list of files that I'm trying to copy and move (using cp and mv) in a bash shell script. The problem I'm running into is that I can't get either command to recognize a huge number of the files, seemingly because the filenames contain spaces and/or Unicode characters. I couldn't find any switches to decode/re-encode these characters. Instead, for example, if I copy "file name.xml", I get "*.xml" and a script error saying the file wasn't found. Does anyone know settings or commands that will deal with these files?
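    The space problem is usually shell word splitting rather than anything about cp or mv themselves: double-quoting the variable ("$file"), or feeding names through a while read -r loop or find -print0 | xargs -0, keeps each name intact. An alternative that sidesteps the shell entirely is to do the copying from a small script; a sketch in Python, assuming a hypothetical files.txt listing one source path per line:

        import io
        import shutil

        DEST = "/path/to/destination/"   # placeholder destination directory

        with io.open("files.txt", encoding="utf-8") as listing:
            for line in listing:
                src = line.rstrip("\n")
                if src:
                    shutil.copy2(src, DEST)   # spaces and non-ASCII names need no special handling here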

    Read the article

  • Iterating through folders and files in a batch file?

    - by Will Marcouiller
    Here's my situation. A project has as its objective to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for better understanding), and they will be zipped/compressed. I want my batch script to be called like so:

        BatchScript.bat "c:\temp\usd\Folder 0"

    I'm using 7za.exe as the command-line extraction tool. What I want my batch script to do is to iterate through "Folder 0"'s subfolders and extract all of the contained ZIP files into their respective folders. It is obligatory that the extracted files end up in the same folder as their respective ZIP files. So, files contained in "File 1.zip" are needed in "Folder 1" and so forth. I have read about the FOR...DO command on Windows XP Professional Product Documentation - Using Batch Files. Here's my script:

        @ECHO OFF
        FOR /D %folder IN (%%rootFolderCmdLnParam) DO FOR %zippedFile IN (*.zip) DO 7za.exe e %zippedFile

    I guess that I would also need to change the actual directory before calling 7za.exe e %zippedFile for the file extraction, but I can't figure out how in this batch file (though I know how on the command line, and I know it is the same "cd" instruction). Anyone's help is gratefully appreciated.
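    In a batch file the loop variables need doubled percent signs (%%f), and pushd/popd around the inner loop is the usual way to make each extraction run inside its own folder. If a plain batch file isn't a hard requirement, the same job is easy to sketch in Python (assuming 7za.exe is on the PATH):

        import os
        import subprocess
        import sys

        root = sys.argv[1]   # e.g. "c:\temp\usd\Folder 0", passed on the command line
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.lower().endswith(".zip"):
                    # Run 7za with the zip's own folder as the working directory,
                    # so the extracted files land next to the archive.
                    subprocess.check_call(["7za.exe", "e", name], cwd=dirpath)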

    Read the article

  • Setup.exe files downloading without cab files over poor connections

    - by Colin
    We have customers who are trying to download a setup.exe file over mobile connections that appear to be very slow. They have reported that when they click on the downloaded setup.exe, the install wizard starts up, but part way through the wizard they get an error message indicating that a cab file is corrupt or missing. They couriered a problem tablet to us, and we downloaded the file without a problem but I could replicate the problem by using https to download the file (https is normally used to access the rest of the site, although it is not necessary for the download). When I did this the downloaded file was 2.8MB. It should be 8MB. I don't think that https is the root cause of the problem because I can see the download link in the browser history using http, so I know the customer tried to download using http. I think that the issue is that the poor connection is preventing a complete download, but the browser is acting as if it is complete. Is there a way to ensure the file is downloaded fully, or not at all? Why does the browser not indicate that the download is incomplete?

    Read the article

  • Large amount of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks, but I fear this may be a bit large. Some candidate solutions are to 1) write the whole thing in C (or Fortran), 2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions), or 3) write the whole thing in Python. Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks
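    Since each file is only ~30MB and the work is sum/average-per-group under some predicates, option 3 can stay fully streaming: read each file once and keep nothing but running aggregates in memory. A sketch of that shape in Python (the column names group and value, and the data/*.txt location, are hypothetical):

        import csv
        import glob
        from collections import defaultdict

        sums = defaultdict(float)
        counts = defaultdict(int)

        for path in glob.glob("data/*.txt"):               # placeholder location of the flat files
            with open(path) as handle:
                for row in csv.DictReader(handle, delimiter="\t"):
                    if row["value"] == "":                 # example predicate: skip missing observations
                        continue
                    sums[row["group"]] += float(row["value"])
                    counts[row["group"]] += 1

        for key in sorted(sums):
            print(key, sums[key] / counts[key])            # per-group mean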

    Read the article

  • Remuxing m4v files from h264 AVI files

    - by crankharder
    I have a bunch of, I think, x264-encoded AVIs that I'd like to convert to m4v so that I can play them with QuickTime. Here's how I created them. First I dump the VOB from the DVD with this:

        $ mplayer -dumpstream -dumpfile new.vob dvd://1

    Then I compress it:

        $ mencoder -oac copy -o new.avi -ovc x264 -x264encopts crf=18 new.vob

    I tried doing this to convert them to m4v, but it's blowing up... I tried dumping the h264/aac streams:

        $ mplayer new.avi -dumpvideo -dumpfile new.h264
        $ mplayer new.avi -dumpaudio -dumpfile new.acc

    And remuxing(?) with MP4Box, but I'm getting an error:

        $ MP4Box -add new.h264#video -add new.aac#audio new.m4v
        Cannot find H264 start code
        Error importing new.h264#video: BitStream Not Compliant

    So I'm not sure what to do now...

    Read the article

  • VBScript - Compare and copy files from a folder if newer than destination files

    - by Kenny Bones
    Hi, I'm trying to design a script that's supposed to be used as part of a logon script for a lot of users. This script basically takes a source folder and a destination folder and just makes sure that the destination folder has the exact same content as the source folder, but only copies a file if the date-modified stamp of the source file is newer than that of the destination file. I have been thinking out this basic pseudocode, and I'm just trying to make sure it is valid and solid:

        Dim strSourceFolder, strDestFolder
        strSourceFolder = "C:\Users\User\SourceFolder\"
        strDestFolder = "C:\Users\User\DestFolder\"

        For each file in StrSourceFolder
            ReplaceIfNewer (file, strDestFolder)
        Next

        Sub ReplaceIfNewer (SourceFile, DestFolder)
            Dim DateModifiedSourceFile, DateModifiedDestFile
            DateModifiedSourceFile = SourceFile.DateModified()
            DateModifiedDestFile = DestFolder & "\" & SourceFile.DateModified()
            If DateModifiedSourceFile < DateModifiedDestFile
                Copy SourceFile to SourceFolder
            End if
        End Sub

    Would this work? I'm not quite sure how it can be done, but I could probably spend all day figuring it out. But the people here are generally so amazingly smart that with your help it would take a lot less time :)
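    As a sanity check of the logic (note the copy should happen when the source is newer, i.e. the source date is greater than the destination date), here is the same idea sketched in Python; in VBScript the equivalent pieces are the Scripting.FileSystemObject with the file's DateLastModified property and CopyFile:

        import os
        import shutil

        SRC_FOLDER = r"C:\Users\User\SourceFolder"
        DEST_FOLDER = r"C:\Users\User\DestFolder"

        def replace_if_newer(src_file, dest_folder):
            dest_file = os.path.join(dest_folder, os.path.basename(src_file))
            # Copy when the destination is missing or strictly older than the source.
            if (not os.path.exists(dest_file)
                    or os.path.getmtime(src_file) > os.path.getmtime(dest_file)):
                shutil.copy2(src_file, dest_file)   # copy2 preserves the modification time

        for name in os.listdir(SRC_FOLDER):
            src = os.path.join(SRC_FOLDER, name)
            if os.path.isfile(src):
                replace_if_newer(src, DEST_FOLDER)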

    Read the article
