Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • file reading in python

    - by Jagdev
    So my whole problem is that I have two files. The first has the following format (for Python 2.6):

        #comments
        config = {
        #comments
        'name': 'hello',
        'see?': 'world':'ABC',CLASS=3
        }

    This file has a number of sections like this. The second file has the format:

        [23]
        [config]
        'name'='abc'
        'see?'=
        [23]

    Now the requirement is that I need to compare both files and generate a file like:

        #comments
        config = {
        #comments
        'name': 'abc',
        'see?': 'world':'ABC',CLASS=3
        }

    So the result file will contain the values from the first file, unless the value for the same attribute is present in the second file, in which case that value overwrites it. Now my problem is how to manipulate these files using Python. Thanks in advance (and for your previous answers, delivered in short time). I need to use Python 2.6.
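
    A minimal sketch of the overwrite logic, assuming for illustration a simplified format where the override file holds 'key'='value' lines and the template holds 'key': 'value' lines (the real files above are messier, so the regexes would need adjusting; file names are placeholders). Python 2.6 compatible:

        import re

        def load_overrides(path):
            # Collect 'key'='value' pairs; skip empty values like 'see?'=
            overrides = {}
            for line in open(path):
                m = re.match(r"'([^']+)'='([^']*)'\s*$", line)
                if m and m.group(2):
                    overrides[m.group(1)] = m.group(2)
            return overrides

        def merge(template_path, overrides, out_path):
            # Copy the template, swapping in any overridden values.
            out = open(out_path, 'w')
            for line in open(template_path):
                m = re.match(r"(\s*'([^']+)':\s*)'[^']*'(.*)$", line)
                if m and m.group(2) in overrides:
                    line = "%s'%s'%s\n" % (m.group(1), overrides[m.group(2)], m.group(3))
                out.write(line)
            out.close()

        merge('first.cfg', load_overrides('second.cfg'), 'result.cfg')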

    Read the article

  • cygwin slow file open

    - by Erdem
    My application uses fopen to open a lot of files. While in Linux opening and reading thousands of files doesn't even take a second, in Cygwin it takes more than 5 seconds. I think it is because of the path conversion functions in the Cygwin DLLs. The 'open' function is a bit faster. If I use -mno-cygwin it becomes very fast, but I can't use it. Is there an easy way to make the Cygwin DLLs just open files, without any Linux-to-Windows path conversion?

    Read the article

  • Linux: file recovery (Urgent) [closed]

    - by Ashine
    Hi friends, I desperately need some help with a problem I am facing right now. While creating a softlink for a very important file, I gave the command in reverse by mistake: instead of "ln target linkname" I ran "ln linkname target". As a result, references that pointed to the target file now point to the link, and the actual references to the target file are lost. How can I recover the files? "/home/user/data1" was the original file location and "/home/user/db2" was the desired softlink for this data. I had to run "ln data1 db2" but I ran "ln db2 data1". This has resulted in 'data1' now pointing towards 'db2', and the actual data in 'data1' cannot be retrieved. Someone please help. Thanks in advance.
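
    For reference, a sketch of the intended command with the paths from the question (ln takes the existing target first, then the link name, and -s is needed for a symbolic link, since bare ln creates a hard link). This only shows the correct order; it does not recover any overwritten data:

        ln -s /home/user/data1 /home/user/db2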

    Read the article

  • emacs: force ido-mode to forget history...

    - by Stephen
    Hi, I wonder if I can keep ido from remembering my history, so that it only shows completions for files in the current directory when I am finding a file. I understand that this history feature is useful at times, but I often end up editing the wrong file: I think I am editing 'abc.txt' in the current directory when in fact I am editing the file of the same name in a directory I previously visited (this often happens when there is no 'abc.txt' in the current directory, as I mistakenly assume there is). From reading ido.el I thought to set the following in my .emacs file (I also evaluated these expressions in the running Emacs instance):

        (custom-set-variables
         '(ido-enable-last-directory-history nil)
         '(ido-record-commands nil))

    I also deleted the file ~/.ido.last, but ido still remembers some files I visited before making these changes. How can I purge the previous history? I am not entirely sure what the difference between the two variables above is, but they seem to have done the trick of keeping ido from remembering files I visit from now on. Thanks for your help!
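
    A hedged sketch of the settings usually involved here (variable names as defined in ido.el; the "auto-merge" behaviour, where ido searches previously visited directories when nothing matches locally, is the usual cause of ending up in a same-named file elsewhere):

        ;; Don't persist ido state (the ~/.ido.last file) across sessions.
        (setq ido-save-directory-list-file nil)
        ;; Don't offer matches from previously visited directories when
        ;; nothing in the current directory matches.
        (setq ido-auto-merge-work-directories-length -1)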

    Read the article

  • Automate Excel Text Import Wizard?

    - by Dave Mackey
    I receive files on occasion in a fixed width format. I need to import them into Excel but Excel doesn't perfectly pick up the columns. I can do it manually each time with the Text Import Wizard, but I'm wondering if there is a way to create a "text import template" or something similar - since these files are always the same format.
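
    If scripting the import is acceptable, a hedged alternative to the wizard is to parse the fixed-width file outside Excel and write a spreadsheet; a minimal sketch using pandas (the widths, column names, and file names below are made-up placeholders for the real layout):

        import pandas as pd

        # Hypothetical fixed-width layout: substitute the real column widths.
        widths = [10, 15, 8, 12]
        names = ['id', 'name', 'qty', 'date']

        df = pd.read_fwf('report.txt', widths=widths, names=names)
        df.to_excel('report.xlsx', index=False)  # requires openpyxl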

    Read the article

  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names, basically anything.local. I know that part is working because I can access static files properly; however, PHP files don't work. I get the standard "No input file specified." error, which normally occurs when the file doesn't exist, but it definitely does exist and the path is correct, because I can access static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission, unless php-cgi is running under a different user without me telling it to. Here's my config:

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            gzip on;

            server {
                # Listen for HTTP
                listen 80;

                # Match to local host names.
                server_name *.local;

                # We need to store a "cleaned" host.
                set $no_www $host;
                set $no_local $host;

                # Strip out www.
                if ($host ~* www\.(.*)) {
                    set $no_www $1;
                    rewrite ^(.*)$ $scheme://$no_www$1 permanent;
                }

                # Strip local for directory names.
                if ($no_www ~* (.*)\.local) {
                    set $no_local $1;
                }

                # Define default path handler.
                location / {
                    root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                    index index.php index.html index.htm;

                    # Route non-existent paths through Kohana system router.
                    try_files $uri $uri/ /index.php?kohana_uri=$request_uri;
                }

                # pass PHP scripts to FastCGI server listening on 127.0.0.1:9000
                location ~ \.php$ {
                    root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    include fastcgi.conf;
                }

                # Prevent access to system files.
                location ~ /\. {
                    return 404;
                }
                location ~* ^/(modules|application|system) {
                    return 404;
                }
            }
        }
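
    For context, "No input file specified." is php-cgi's own message when the SCRIPT_FILENAME it receives doesn't resolve to a readable file. A hedged guess for a setup like this one is the relative root (../Users/...): fastcgi.conf builds SCRIPT_FILENAME from $document_root, and php-cgi on Windows may not resolve that relative path the way nginx does. A sketch of the usual shape of the fix, with an absolute root (the drive letter is an assumption for illustration):

        location ~ \.php$ {
            # Absolute root, so the SCRIPT_FILENAME built from
            # $document_root is a path php-cgi can actually open.
            root C:/Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }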

    Read the article

  • SAXException: Unexpected end of file after null

    - by itsadok
    I'm getting the error in the title occasionally from a process that parses lots of XML files. The files themselves seem OK, and running the process again on the same files that generated the error works just fine. The exception occurs on a call to XMLReader.parse(InputStream is). Could this be a bug in the parser (I use Piccolo)? Or is it something about how I open the file stream? No multithreading is involved. Piccolo seemed like a good idea at the time, but I don't really have a good excuse for using it. I will try switching to the default SAX parser and see if that helps. Update: It didn't help, and I found that Piccolo is considerably faster for some of the workloads, so I went back.

    Read the article

  • bat file using winrar taking too long to run

    - by Jessie
    Hi guys, I have this script, which extracts all the folders and files from c:\projects, puts them into WinRAR, and transfers them to c:\backup\projects:

        for /f "delims==" %%D in ('DIR C:\projects /A /B /S') do (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "c:\backup\projects.rar" "%%D"
        )

    I have also tried the script below, which uses the same source c:\projects but puts each folder into its own separate WinRAR archive, mirroring the source, then transfers the archives into c:\backup:

        FOR /F "DELIMS==" %%D in ('DIR C:\projects /AD /B') DO (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "C:\Backup\%%D.rar" "%%D"
        )

    My question is: the second script takes only two hours to run, while the first takes over 24 hours. Is there any way to make the first script faster? If anything, shouldn't the first script be faster?
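
    One plausible reading of the gap: DIR /A /B /S enumerates every file individually, so the first script launches WinRAR once per file, and each run reopens and updates the ever-growing projects.rar; the second script launches WinRAR only once per top-level folder. A hedged sketch that builds the single archive in one invocation instead (same m and -r switches as above):

        "C:\Program Files\WinRAR\WinRAR.EXE" m -r "c:\backup\projects.rar" "C:\projects\*"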

    Read the article

  • What Source Control?

    - by Hein du Plessis
    I desperately need source control to manage projects between more than one developer. A long time ago I used Visual Source Safe and it worked quite well. Can anybody recommend a free substitute? I have the following basic requirements:

        - I need to host the repository on my own server.
        - I do not want extra clutter within my source files, like CVS does.
        - I need proper check in / check out, so that nobody can change a module until I've checked it back in.
        - I don't want / need source code merging / branching.

    We use Delphi for web development, so many html files, images, sql files, etc. Any recommendations?

    Read the article

  • asp.net: deploying local resources - is embedding possible?

    - by chris
    I have an asp.net app with some local resources. These resources are used in the aspx and code-behind files.

    aspx:

        <asp:TextBox ID="TextBox1" runat="server" Text="<%$ Resources:testTag %>" />

    .vb:

        TextBox1.Text = GetLocalResourceObject("testTag").ToString

    If I deploy the .resx files with the app, there are no problems. However, if I change the build action on the resx file to "Embedded Resource", the resources aren't available, even though they're in the DLL that gets built. Is it possible to deploy resources in a DLL, or am I stuck with managing & deploying resx files on the server?

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure; I assumed it would serve as an effective backup solution.

    Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply, and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine, because I generally keep the most important files (music, pictures, videos) stored in multiple places, like on my laptop, my wife's laptop, and an encrypted USB hard drive.

    Wait. What if a giant hedgehog meteor attacks my house from space traveling at Mach 3 and all machines and hard disks are blown to smithereens? Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier.

    Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four?

    There's no questioning the need for backups. None. However, there are three questions I'd really like addressed. To what level should one back up? I definitely understand the merits of physical disk redundancy, and I also believe in keeping important files on multiple machines to thin out the possibility of losing all of my files. Online backups make sense, but they raise the next question: what should I be backing up remotely, and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based), at least locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of the configuration for my individual computers, the reality is that I probably lost the machines too. And the cloud is a long way away from here: I can run backups over CAT-6 locally and see 100 MB/s easily, but I'm afraid I'll only see 2 MB/s at best when transferring up to the cloud.

    Read the article

  • git status: how to ignore some changes

    - by Mr Fooz
    Is there a way to have git status ignore certain changes within a file? Background: I have some files in my repository that are auto-generated (yes, I know that's typically not recommended, but I have no power to change this). Whenever I build my tree, these auto-generated files get their status information updated (who generated them, a timestamp, etc.). When I run git status, I'd like it to run a filter on these generated files that strips out this transient status information; I only want them to show up in the "Changed but not updated:" section of git's output if there are other, real changes. Using the .gitattributes approach found at http://progit.org/book/ch7-2.html, I am able to get git diff to ignore these status-line changes using a simple egrep filter. I'd like to get git status to also use textconv filters (or something equivalent). I'd prefer it if merges aren't affected by any of this filtering.
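
    For reference, a sketch of the textconv setup the question refers to (the file pattern and grep expression are placeholders; note this shapes git diff output only, which is exactly the limitation being asked about):

        # .gitattributes
        *.gen diff=stripstatus

        # register the filter; git appends the filename to this command
        git config diff.stripstatus.textconv "egrep -v '^(Generated-by|Timestamp):'"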

    Read the article

  • What is a vim "runtime directory"?

    - by Andres Jaan Tack
    I'm trying to get started with things like FuzzyFinder, but I am stuck at the point where it says:

        INSTALLATION                                        fuf-installation

        Put all files into your runtime directory. If you have the zip file,
        extract it to your runtime directory.

        You should place the files as follows:
            your_runtime_directory/plugin/fuf.vim
            ...

    What is my "runtime directory"? How do I know if I have one? Why does it matter how I put things into it?
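
    For what it's worth, a short sketch of how to check it from inside Vim ('runtimepath' is the option that defines it; on Unix the first entry is conventionally ~/.vim, on Windows ~/vimfiles, and it's fine to create the directory if it doesn't exist):

        :set runtimepath?
        " e.g. runtimepath=~/.vim,/usr/share/vim/vimfiles,...
        " so FuzzyFinder's files would go under ~/.vim/plugin/fuf.vim etc.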

    Read the article

  • Create javadoc with multiple src dirs

    - by Ed Marty
    I have a Util package with source files in three separate directories, defined like so:

        src/com/domain/util
        src/Standard/com/domain/util
        src/Extended/com/domain/util

    The package is built with the first set of files and either the second or the third set, to create a total of two different implementations of the same interface. Now I want to generate javadoc based on those files. How can I specify that? What I really want to do is

        javadoc com.domain.util -sourcepath ./src;./src/Standard

    to build the javadoc for the standard Util package, and

        javadoc com.domain.util -sourcepath ./src;./src/Extended

    to build the javadoc for the extended Util package. This doesn't work. The only way I've found so far to actually make it work is to merge the directory structure of the common classes and the Standard classes into another location, run with that for the standard javadoc, then do the same for the Extended package. Is there another way?

    Read the article

  • Windows Vista file permissions does not inherit when copying to a network share

    - by vdboor
    I've got a network share with specific permissions on a subfolder (e.g. access for developers and freelancers). A designer copied PNG files from his local system to the network share. These files didn't inherit the folder permissions, but only gave access to Administrators. Is there a setting somewhere that restricts access like this, and can it be avoided? The local system runs Vista; the server runs Windows 2003.

    Read the article

  • How does NFS read cache work on Debian?

    - by Ztyx
    I am planning to use NFS to serve out many small files. They will be read very often so client side caching is crucial. Does NFS handle this? Is there a way to increase the client side caching in some way? ...or should I look at another solution? Syncing using rsync or unison periodically is not an option since the files are modified on the client side from time to time.
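
    A hedged sketch of the knobs usually involved: the Linux NFS client already caches file data and attributes in RAM, and the attribute-cache mount options control how long that cache is trusted before revalidating with the server (the 60-second value below is an arbitrary example):

        # /etc/fstab -- stretch attribute caching to 60s
        server:/export  /mnt/export  nfs  actimeo=60  0  0

    For a persistent on-disk client cache there is also the fsc mount option backed by the cachefilesd daemon, though whether it helps depends on the workload.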

    Read the article

  • Uninstaller that can work with failed installs

    - by Nifle
    I'd like an application that can uninstall programs that do not show up in the normal Add/Remove Programs list. I have an application that failed to install properly: it installs halfway and then crashes, leaving shortcuts, folders with exe and config files, and some stuff in the registry. Ideally I would like to be able to point the uninstaller at the install dir and tell it to remove everything that references files in that dir.

    Read the article

  • Multithread http downloader with webui [closed]

    - by kiler129
    I'm looking for software similar to JDownloader or PyLoad. JD is pretty good, but it uses heavyweight Java and for now has a very weak web interface. PyLoad is awesome and includes a simple but powerful web UI, but downloading 10 files (10 threads each, so 100 connections in total, running at around 8 MB/s altogether) consumes a lot of CPU: a whole core for me. Do you know any lightweight alternatives? aria2c is good for the console, but I failed to find any good web UI for it; the official one is decent, but after adding more files it almost crashes Chrome :)

    Read the article

  • Programmatically Create Controller in Rails

    - by Trey Bean
    What's the best way to dynamically create a controller in Rails? I've got a class that needs to generate a bunch of controllers that inherit from it. I could just create a bunch of files in /app/controllers, but they'd all be basically empty files. There's got to be a way to generate these classes dynamically and have them treated like other controllers in Rails, e.g. reloaded correctly in dev mode. I tried putting this in a config/initializer:

        FL.contact_types.each do |contact_type|
          controller_name = "#{contact_type.pluralize}Controller"
          unless Object.const_defined?(controller_name.to_sym)
            Object.const_set(controller_name.to_sym, Class.new(ContactsController))
          end
        end

    This worked, but I run into the dependency/reload problem and get "A copy of AuthenticatedSystem has been removed from the module tree but is still active", since ContactsController inherits from ApplicationController, which includes AuthenticatedSystem. Is creating a bunch of empty files really the best solution?

    Read the article

  • Downloading a Directory Tree with FTPLIB

    - by Anthony Lemmer
    I'd like to download a directory and all of its contents to the local HD. Here's the code I have thus far (crashes if there's a sub-directory, else grabs all the files):

        import ftplib
        import configparser
        import os

        def runBackups():
            # Load INI
            filename = 'connections.ini'
            config = configparser.SafeConfigParser()
            config.read(filename)
            connections = config.sections()
            i = 0
            while i < len(connections):
                # Load settings
                uri = config.get(connections[i], "uri")
                username = config.get(connections[i], "username")
                password = config.get(connections[i], "password")
                backupPath = config.get(connections[i], "backuppath")
                archiveTo = config.get(connections[i], "archiveto")
                # Start back-ups
                ftp = ftplib.FTP(uri)
                ftp.login(username, password)
                ftp.set_debuglevel(2)
                ftp.cwd(backupPath)
                files = ftp.nlst()
                for filename in files:
                    ftp.retrbinary('RETR %s' % filename,
                                   open(os.path.join(archiveTo, filename), 'wb').write)
                ftp.quit()
                i += 1
            print()
            print("Back-ups complete.")
            print()
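
    A minimal recursive sketch of the missing piece, assuming the server returns bare names from NLST and lets us probe entries with cwd to tell directories from files (a common ftplib idiom, since NLST alone doesn't distinguish them). It would replace the flat nlst/retrbinary loop above, e.g. download_tree(ftp, backupPath, archiveTo), with backupPath a single path component:

        import ftplib
        import os

        def download_tree(ftp, name, local_dir):
            # Decide whether `name` is a directory by trying to enter it.
            try:
                ftp.cwd(name)
            except ftplib.error_perm:
                # Not a directory: fetch it as a plain file.
                with open(os.path.join(local_dir, name), 'wb') as fh:
                    ftp.retrbinary('RETR %s' % name, fh.write)
                return
            # It is a directory: mirror its contents, then climb back out.
            subdir = os.path.join(local_dir, name)
            if not os.path.isdir(subdir):
                os.makedirs(subdir)
            for entry in ftp.nlst():
                download_tree(ftp, entry, subdir)
            ftp.cwd('..')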

    Read the article

  • problem with nitro pdf

    - by Nrew
    I'm converting lots of .doc files into .pdf using Nitro PDF Express. The program hadn't finished converting the 37 files yet; maybe 10 were converted and already in the output directory. But when I cancelled it, even the converted ones were deleted. Can I still find them somewhere, or are they gone for good?

    Read the article

  • remotely running find -exec options

    - by Michael Merchant
    I'm trying to set up a bash process for deploying my Django project onto a Linux server. Through Cygwin, I'm running a script that calls scp to copy my files over. Is there a similar command to delete *.pyc files? So far I've only been able to accomplish this locally, after logging in with ssh, using:

        find . -name "*.pyc" -exec rm -rf {} \;

    I'm looking for some kind of command to call remotely that would be equivalent.
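
    A one-line sketch of the usual approach (user, host and path are placeholders): ssh runs the quoted command on the server, and the single quotes keep find's arguments from being expanded by the local shell:

        ssh user@server 'find /srv/myproject -name "*.pyc" -exec rm -f {} \;'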

    Read the article
