Search Results

Search found 37650 results on 1506 pages for 'files'.


  • Trying to upgrade SQL Server 2008 to R2 but SQL is sleeping or dead?

    - by oJM86o
    I've used the option to upgrade SQL Server 2008 to R2, but it gets to about 20-30% and then just sits there. I've left it alone for over 2 hours; the PC is definitely not frozen, because I can click Help or move the window around, but it has said "Install_sql_common_core_loc_Cpu64_1033_action: Install Files. Copying new files" that entire time. I have tried running the install from a CD as well as from a network drive, both with the same issue. Is there anything I can check or do?
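
    A hedged suggestion rather than a confirmed fix: SQL Server setup writes detailed logs while it runs, so watching them grow tells you whether the installer is still copying files or is genuinely hung. The path below is the standard setup log location for SQL Server 2008/R2:

        type "%ProgramFiles%\Microsoft SQL Server\100\Setup Bootstrap\Log\Summary.txt"
        rem The Log folder also contains timestamped per-run subfolders whose
        rem Detail.txt keeps updating for as long as setup is making progress.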


  • Is there a way to change the order of tabs in Foxit Reader?

    - by Harold
    In web browsers you can drag tabs to change their order, but I can't do that in Foxit Reader (version 3.1.4.1125, on Windows Vista Home, Chinese Traditional). Example: I open three files (Page2.pdf, Page3.pdf, Page1.pdf), which opens Foxit Reader with a tab for each file, in that order. Is there a way to change the order of the tabs to Page1.pdf, Page2.pdf, Page3.pdf? This would really be helpful when you have many files open... TIA! Harold


  • How to extract hhp file from a chm file

    - by Sam
    Hi, I have an A.chm file for my Windows application, which runs as expected. When I decompile it using HTML Help Workshop I get a set of HTML files, an .hhc file, and an .hhk file. I then compile another file, B.chm, from these extracted files without changing any of them. (I want to add more HTML content to this file, but it looks like I am losing some information in the decompile.) The output file I get is 72K, whereas the original file was 75K. B.chm's contents look fine when viewed in the CHM viewer, but the behavior is lost when it is used with the application. After reading around I found that if the .hhp can be extracted from a .chm file, then the file can be reconstructed as-is without losing any mappings or aliases. Is that true? How can I extract the .hhp file from a .chm file? Thanks, Sam
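
    For reference, the stock decompile step can also be run without the HTML Help Workshop GUI; hh.exe ships with Windows and takes a -decompile switch (output folder first, then the .chm). One hedged caveat: the original .hhp project file is not stored verbatim inside a .chm, so any decompiler can at best reconstruct an approximation of it from the compiled settings.

        hh.exe -decompile extracted A.chm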


  • Is there a way to automatically make a makefile from a template toolkit template?

    - by Smack my batch up
    My static web pages are built from a huge bunch of templates which are inter-included using Template Toolkit's "import" and "include", so page.html looks like this:

        [% INCLUDE top %]
        [% IMPORT middle %]

    Then top might include even more files. I have very many of these files, and they all have to be run through Template Toolkit to create the web pages in various languages (English, French, etc., not computer languages). This is a very complicated process, and when one file is updated I would like to be able to automatically remake only the necessary files, using a makefile or something similar. Are there any tools which can parse Template Toolkit templates and create a dependency list for use in a makefile? Or are there better ways to automate this process?
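
    Absent a ready-made tool, here is a minimal sketch of the scanning approach: walk the template directory, grep each file for directives that name a literal file, and print Makefile-style dependency lines. The directory layout is hypothetical, and dynamic includes such as [% INCLUDE $page %] cannot be resolved statically:

        #!/usr/bin/env python
        # Sketch: emit "template: dependencies" lines for a Makefile by scanning
        # Template Toolkit files for INCLUDE/IMPORT/PROCESS directives.
        import os
        import re
        import sys

        DIRECTIVE = re.compile(r'\[%-?\s*(?:INCLUDE|IMPORT|PROCESS)\s+([\w./-]+)')

        root_dir = sys.argv[1] if len(sys.argv) > 1 else '.'
        for root, dirs, files in os.walk(root_dir):
            for name in files:
                path = os.path.join(root, name)
                deps = DIRECTIVE.findall(open(path).read())
                if deps:
                    print('%s: %s' % (path, ' '.join(deps)))

    Since every template gets its own rule, make chains the rules itself, so a change to top re-triggers everything that includes it directly or indirectly.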


  • need thoughts on my interview question - .net, c#

    - by uno
    One of the questions I was asked: I have a database table with the following columns:

        pid - unique identifier
        orderid - varchar(20)
        documentid - int
        documentpath - varchar(250)
        currentLocation - varchar(250)
        newlocation - varchar(250)
        status - varchar(15)

    I have to write a C# app to move the files from currentLocation to newlocation and update the status column with either 'SUCCESS' or 'FAILURE'. This was my answer: create a list of all the records using LINQ; create a command object to perform the file moves; in a foreach, invoke a delegate to move each file, using EndInvoke to capture any exception and update the DB accordingly. I was told that the command pattern and delegates did not fit the bill here, and I was asked to think about and implement a more suitable GoF pattern. I am not sure what they were looking for. In this day and age, do candidates really keep that much in their heads, when one always has Google to find an answer and come up with a solution?
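
    For what it's worth, a minimal sketch of the plain mechanics (no claim that this is the GoF pattern the interviewer had in mind; the connection string, table name, and column types are assumptions based on the question text):

        // Move each document and record SUCCESS/FAILURE per row.
        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.IO;

        class FileMover
        {
            static void Main()
            {
                var rows = new List<Tuple<object, string, string>>();
                using (var conn = new SqlConnection(
                    @"Server=.;Database=Docs;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(
                        "SELECT pid, currentLocation, newlocation FROM documents", conn))
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            rows.Add(Tuple.Create(reader.GetValue(0),
                                                  reader.GetString(1),
                                                  reader.GetString(2)));

                    foreach (var row in rows)
                    {
                        string status;
                        try { File.Move(row.Item2, row.Item3); status = "SUCCESS"; }
                        catch (IOException) { status = "FAILURE"; }
                        catch (UnauthorizedAccessException) { status = "FAILURE"; }

                        using (var update = new SqlCommand(
                            "UPDATE documents SET status = @s WHERE pid = @p", conn))
                        {
                            update.Parameters.AddWithValue("@s", status);
                            update.Parameters.AddWithValue("@p", row.Item1);
                            update.ExecuteNonQuery();
                        }
                    }
                }
            }
        }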


  • Eliminating code duplication in a single file

    - by Jon
    Sadly, a project that I have been working on lately has a large amount of copy-and-paste code, even within single files. Are there any tools or techniques that can detect duplication or near-duplication within a single file? I have Beyond Compare 3 and it works well for comparing separate files, but I am at a loss for comparing single files. Thanks in advance. Edit: Thanks for all the great tools! I'll definitely check them out. This project is an ASP.NET/C# project, but I work with a variety of languages including Java; I'm interested in what tools are best (for any language) to remove duplication.


  • Mounting a drive in Ubuntu 9.10 (Karmic Koala)

    - by morpheous
    I have just installed Ubuntu on a machine that previously had XP installed on it. The machine has two HDDs (hard disk drives), and I opted to install Ubuntu completely over XP. I am new to Linux, and I am still learning how to navigate the file structure. However, AFAICT, there is only one drive. I want to store programs etc. on the first drive, and data (program output etc.) on the second drive. It appears Ubuntu is not aware that I have two drives (on XP, these were drives C and D). How can I mount the second drive? Ideally, I want this to happen automatically, so that the drive is available whenever I log in, without manual intervention on my part. In XP, I could refer to files on a specific drive by prefixing the path with the drive letter (e.g. c:\foobar.cpp and d:\foobar.dat). I suspect the notation on Ubuntu is different; how may I refer to files on different drives? Last but not least (a bit unrelated to the previous questions, and again about directory structure): I am a developer (C++ for desktops and PHP for websites), and I want to install the following apps/libraries:

        i).    Apache 2.2
        ii).   PHP 5.2.11
        iii).  MySQL (5.1)
        iv).   SVN
        v).    NetBeans
        vi).   C++ development tools (gcc, gdb, emacs, etc.)
        vii).  Qt toolkit
        viii). some miscellaneous scientific software (e.g. www.r-project.org, www.gnu.org/software/octave/)

    I would be grateful if someone could recommend a directory layout for these applications. Regarding development, I would also be grateful if someone could point out where to store my project and source files, i.e.:

        (i)  *.cpp, *.hpp, *.mak files for C++ projects
        (ii) individual websites

    On my XP machine the layout for C++ development was like this:

        c:\dev\devtools           (common libs and headers etc.)
        c:\dev\workarea           (root folder for projects)
        c:\dev\workarea\c++       (C++ projects)
        c:\dev\workarea\websites  (web projects)

    I would like to create a similar folder structure on the Linux machine, but it is not clear whether to place these folders under /, /usr, /home, or somewhere else (there seems to be a baffling number of choices, so I want to get it right the first time, i.e. have a directory structure most developers use, so it is easier when communicating with other Ubuntu/Linux developers).
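
    For the mounting part, a hedged sketch (the device and mount-point names are assumptions; fdisk shows the real ones). Linux has no drive letters: a second disk appears under whatever directory you mount it on, and an /etc/fstab entry repeats the mount at every boot:

        sudo fdisk -l                  # list disks; a second drive is typically /dev/sdb
        sudo mkdir /data               # create a mount point
        sudo mount /dev/sdb1 /data     # mount the first partition of the second disk

        # /etc/fstab line to mount it automatically at boot (filesystem type assumed):
        # /dev/sdb1  /data  ext3  defaults  0  2

    With that in place, what was d:\foobar.dat on XP is simply referred to as /data/foobar.dat.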


  • find command in Linux

    - by Martin
    My goal is to find all PDF files on a remote machine, so I resort to the useful command find. I type find ~ *.pdf or find ~ "*.pdf" and get nothing. I do the same on my own machine and get nothing. Yet a regular search from the menu on my machine finds quite a few PDF files. Would somebody please tell me what I am doing wrong?
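
    For reference, a likely fix, hedged since the question doesn't show any error output: find takes paths, not bare patterns, so the pattern belongs to the -name test and must be quoted so the shell doesn't expand it first:

        find ~ -name '*.pdf'     # match by name under the home directory
        find ~ -iname '*.pdf'    # case-insensitive variant (GNU find)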


  • Thunderbird confused about EPS attachments

    - by Martin
    Sometimes, when I receive emails with an EPS attachment (in Thunderbird 2.0.0.23 on OS X 10.5) sent from Mail on OS X 10.4, I cannot open the EPS file. Thunderbird shows me one file that can't be opened with any software but TextEdit. But if I forward the email to myself, the forwarded email then has two files of the same name, and one of the two forwarded files can be opened as expected. What can be wrong here?


  • file reading in python

    - by Jagdev
    So my whole problem is that I have two files, one with the following format (for Python 2.6):

        #comments
        config = {
            #comments
            'name': 'hello',
            'see?': 'world':'ABC',CLASS=3
        }

    This file has a number of sections like this. The second file has the format:

        [23]
        [config]
        'name'='abc'
        'see?'=
        [23]

    Now the requirement is that I need to compare both files and generate a file like this:

        #comments
        config = {
            #comments
            'name': 'abc',
            'see?': 'world':'ABC',CLASS=3
        }

    So the result file will contain the values from the first file, unless a value for the same attribute is present in the second file, which then overwrites the value. My problem is how to manipulate these files using Python. Thanks in advance (and for your previous quick answers); I need to use Python 2.6.
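
    A minimal sketch of one approach (Python 2.6 compatible), assuming the simplified shapes above: 'key'='value' lines in the override file and 'key': value lines in the config file. The file names are hypothetical, and empty overrides such as 'see?'= are deliberately ignored:

        import re

        # Collect 'key'='value' overrides from the second file.
        overrides = {}
        for line in open('overrides.txt'):
            m = re.match(r"\s*'(.+?)'\s*=\s*(\S+)\s*$", line)
            if m:
                overrides[m.group(1)] = m.group(2)

        # Rewrite the config file, substituting overridden values in place.
        out = []
        for line in open('config.txt'):
            m = re.match(r"(\s*)'(.+?)'\s*:\s*(.+?)(,?)\s*$", line)
            if m and m.group(2) in overrides:
                line = "%s'%s': %s%s\n" % (m.group(1), m.group(2),
                                           overrides[m.group(2)], m.group(4))
            out.append(line)

        open('merged.txt', 'w').writelines(out)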


  • cygwin slow file open

    - by Erdem
    My application uses fopen to open a lot of files. While on Linux opening and reading thousands of files doesn't even take a second, in Cygwin it takes more than 5 seconds. I think this is because of the path conversion functions in the Cygwin DLLs; the open function is a bit faster. If I use -mno-cygwin it becomes very fast, but I can't use that option. Is there an easy way to make the Cygwin DLLs just open files, without any Linux-to-Windows path conversion?


  • Linux: file recovery (Urgent) [closed]

    - by Ashine
    Hi friends, I desperately need some help with a problem I am facing. While creating a softlink for a very important file, I gave the command in reverse by mistake: instead of "ln target linkname" I ran 'ln linkname target'. This has resulted in references that pointed to the target file now pointing to the link, and the actual references to the target file being lost. How can I recover the files? "/home/user/data1" was the original file location, and "/home/user/db2" was the desired softlink for this data. I should have run "ln data1 db2", but I ran 'ln db2 data1'. As a result, 'data1' now points towards 'db2', and the actual data in 'data1' cannot be retrieved. Someone please help. Thanks in advance.


  • What Source Control?

    - by Hein du Plessis
    I desperately need source control to manage projects between more than one developer. A long time ago I used Visual SourceSafe and it worked quite well. Can anybody recommend a free substitute? I have the following basic requirements:

        - I need to host the repository on my own server.
        - I do not want extra clutter within my source files, like CVS leaves.
        - I need proper check-in / check-out, so that nobody can change a module until I've checked it back in.
        - I don't want / need source code merging / branching.

    We use Delphi for web development, so there are many HTML files, images, SQL files, etc. Any recommendations?


  • emacs: force ido-mode to forget history...

    - by Stephen
    Hi, I wonder if I can keep ido from remembering my history, and have it only show completions for files in the current directory when I am searching for a file. I understand that this history feature is useful at times, but I often end up editing the wrong file: I think I am editing a file called 'abc.txt' in the current directory, when in fact I am editing a file by the same name in another directory I previously visited (this often happens when there is no 'abc.txt' in the current directory, as I mistakenly assume there is). From reading the ido.el file I thought to set the following in my .emacs file (I also evaluated these expressions in the running Emacs instance):

        (custom-set-variables
         '(ido-enable-last-directory-history nil)
         '(ido-record-commands nil))

    and I deleted the file called .ido.last in ~/, but ido still remembers some files I visited before making these changes. How can I purge my previous history? Also, I am not entirely sure what the difference between the two variables above is, but setting them seems to have done the trick of keeping ido from remembering files I visit in the future. Thanks for your help!
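
    A hedged sketch of a fuller "no history" setup; these are all real ido variables, and the work-directory/work-file lists are where ido keeps the visited directories and file names that cause the across-directory matches:

        ;; Keep ido from recording or consulting history (sketch).
        (setq ido-enable-last-directory-history nil ; don't jump to previous dirs
              ido-record-commands nil               ; don't record ido commands
              ido-max-work-directory-list 0         ; forget visited directories
              ido-max-work-file-list 0)             ; forget visited file names
        ;; Then delete ~/.ido.last once and restart Emacs to purge what was
        ;; already saved.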


  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nicely with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names (basically anything.local). I know that part is working, because I can access static files properly; however, PHP files don't work. I get the standard "No input file specified." error, which normally occurs when the file doesn't exist, but it definitely does exist and the path is correct, because I can access static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission, unless php-cgi is running under a different user without me telling it to. Here's my config:

        worker_processes 1;

        events {
          worker_connections 1024;
        }

        http {
          include mime.types;
          default_type application/octet-stream;
          sendfile on;
          keepalive_timeout 65;
          gzip on;

          server {
            # Listen for HTTP
            listen 80;

            # Match to local host names.
            server_name *.local;

            # We need to store a "cleaned" host.
            set $no_www $host;
            set $no_local $host;

            # Strip out www.
            if ($host ~* www\.(.*)) {
              set $no_www $1;
              rewrite ^(.*)$ $scheme://$no_www$1 permanent;
            }

            # Strip local for directory names.
            if ($no_www ~* (.*)\.local) {
              set $no_local $1;
            }

            # Define default path handler.
            location / {
              root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
              index index.php index.html index.htm;

              # Route non-existent paths through Kohana system router.
              try_files $uri $uri/ /index.php?kohana_uri=$request_uri;
            }

            # pass PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
              root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              include fastcgi.conf;
            }

            # Prevent access to system files.
            location ~ /\. {
              return 404;
            }

            location ~* ^/(modules|application|system) {
              return 404;
            }
          }
        }
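
    One thing worth checking (a hedged guess, not a confirmed fix): php-cgi locates the script through the SCRIPT_FILENAME FastCGI parameter, which the stock fastcgi.conf builds as $document_root$fastcgi_script_name, and a relative root like ../Users/... can yield a path that php-cgi on Windows fails to resolve. A sketch with an absolute root; the C:/ prefix is an assumption:

        location ~ \.php$ {
          # Absolute document root; forward slashes are fine on Windows.
          root C:/Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          include fastcgi.conf;
        }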


  • Automate Excel Text Import Wizard?

    - by Dave Mackey
    I occasionally receive files in a fixed-width format. I need to import them into Excel, but Excel doesn't pick up the columns correctly on its own. I can do it manually each time with the Text Import Wizard, but I'm wondering if there is a way to create a "text import template" or something similar, since these files are always in the same format.
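
    One way to approximate a reusable "text import template" (hedged: this is recorded-macro-style VBA, and the column start positions are placeholders; run the wizard once with the macro recorder on to capture the real ones):

        ' Sketch: reopen a fixed-width file with saved column breaks.
        Sub ImportFixedWidth()
            Workbooks.OpenText _
                Filename:="C:\data\report.txt", _
                DataType:=xlFixedWidth, _
                FieldInfo:=Array(Array(0, xlGeneralFormat), _
                                 Array(10, xlGeneralFormat), _
                                 Array(25, xlGeneralFormat))
        End Sub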


  • git status: how to ignore some changes

    - by Mr Fooz
    Is there a way to have git status ignore certain changes within a file? Background: I have some files in my repository that are auto-generated (yes, I know that's typically not recommended, but I have no power to change this). Whenever I build my tree, these auto-generated files get their status information updated (who generated them, a timestamp, etc.). When I run git status, I'd like it to run a filter on these generated files that strips out this transient status information; I only want them to show up in the "Changed but not updated:" section of git's output if there are other, real changes. Using the .gitattributes approach found at http://progit.org/book/ch7-2.html, I am able to get git diff to ignore these status-line changes with a simple egrep filter. I'd like git status to also use textconv filters (or something equivalent). I'd prefer it if merges aren't affected by any of this filtering.
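
    A hedged pointer: textconv only affects diff display, but gitattributes also supports clean/smudge filters, and the clean filter changes the content git compares against the index, so git status respects it. The file pattern and marker text below are made up:

        # .gitattributes
        *.gen  filter=stripstamp

        # one-time setup: strip the transient status lines on the way in
        git config filter.stripstamp.clean "sed -e '/^\/\/ generated-by:/d'"

    The trade-off: the stripped lines really are removed from what git stores, and merges pass through the same filter, which may or may not be acceptable here.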


  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid-state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure, serving as an effective backup solution.

    Wait. What if a bolt of lightning finds a way to travel into my house, through my surge protector, into my power supply, and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine, because I generally keep the most important files (music, pictures, videos) stored in multiple places, like on my laptop, my wife's laptop, and an encrypted USB hard drive.

    Wait. What if a giant hedgehog meteor attacks my house from space traveling at Mach 3 and all machines and hard disks are blown to smithereens? Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier.

    Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed:

    To what level should one back up? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines, thinning out the possibility of losing all of my files. Online backups make sense, but they raise the following question.

    What should I be backing up remotely, and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of the configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over Cat-6 here and see 100MB/s easily, but I'm afraid I'm only going to see 2MB/s at best when transferring up to the cloud.


  • bat file using winrar taking too long to run

    - by Jessie
    Hi guys, I have this script, which grabs all the folders and files from my c:\projects location, puts them into a WinRAR archive, and moves the archive to c:\backup\projects:

        for /f "delims==" %%D in ('DIR C:\projects /A /B /S') do (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "c:\backup\projects.rar" "%%D"
        )

    I have also tried the script below, which uses the same source, c:\projects, but puts each top-level folder into its own separate WinRAR archive, as in the source, and then moves the archives into c:\backup:

        FOR /F "DELIMS==" %%D IN ('DIR C:\projects /AD /B') DO (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "C:\Backup\%%D.rar" "%%D"
        )

    My question is: the second script takes only two hours to run, while the first takes over 24 hours. Is there any way to make the first script faster? If anything, shouldn't the first script be faster?
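
    A hedged explanation plus sketch: the first script's DIR /S enumerates every file individually, so WinRAR is launched once per file and reopens and updates the archive every time, while the second launches it only once per top-level folder. If a single archive is the goal, one invocation over the whole tree should be fastest (m moves files into the archive, -r recurses):

        REM Sketch: archive the whole tree in a single WinRAR run.
        "C:\Program Files\WinRAR\WinRAR.EXE" m -r "C:\backup\projects.rar" "C:\projects\*"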


  • asp.net: deploying local resources - is embedding possible?

    - by chris
    I have an ASP.NET app with some local resources. These resources are used in the .aspx and code-behind files:

        aspx:
        <asp:TextBox ID="TextBox1" runat="server" Text="<%$ Resources:testTag %>" />

        .vb:
        TextBox1.Text = GetLocalResourceObject("testTag").ToString

    If I deploy the .resx files with the app, there are no problems. However, if I change the build action on the .resx files to "Embedded Resource", the resources aren't available, even though they're in the DLL that gets built. Is it possible to deploy resources in a DLL, or am I stuck with managing and deploying .resx files on the server?


  • SAXException: Unexpected end of file after null

    - by itsadok
    I'm occasionally getting the error in the title from a process that parses lots of XML files. The files themselves seem OK, and running the process again on the same files that generated the error works just fine. The exception occurs on a call to XMLReader.parse(InputSource is). Could this be a bug in the parser (I use Piccolo)? Or is it something about how I open the file stream? No multithreading is involved. Piccolo seemed like a good idea at the time, but I don't really have a good excuse for using it; I will try switching to the default SAX parser and see if that helps. Update: It didn't help, and I found that Piccolo is considerably faster for some of the workloads, so I went back.


  • Create javadoc with multiple src dirs

    - by Ed Marty
    I have a Util package with source files in three separate directories, defined like so:

        src/com/domain/util
        src/Standard/com/domain/util
        src/Extended/com/domain/util

    The package is built from the first set of files plus either the second or the third set, to create a total of two different implementations of the same interface. Now I want to generate javadoc from those files. How can I specify that? What I really want to do is:

        javadoc com.domain.util -sourcepath ./src;./src/Standard

    to build the javadoc for the standard Util package, and:

        javadoc com.domain.util -sourcepath ./src;./src/Extended

    to build the javadoc for the extended Util package. This doesn't work. The only way I've found so far to make it work is to merge the directory structure of the common classes and the Standard classes into another location and run with that for the standard javadoc, then do the same for the Extended package. Is there another way?
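
    One hedged workaround that avoids merging directories: javadoc also accepts explicit source files as arguments, so each variant can be documented by listing its file sets directly (the -d output directories are illustrative):

        javadoc -d doc/standard src/com/domain/util/*.java src/Standard/com/domain/util/*.java
        javadoc -d doc/extended src/com/domain/util/*.java src/Extended/com/domain/util/*.java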


  • What is a vim "runtime directory"?

    - by Andres Jaan Tack
    I'm trying to get started with things like FuzzyFinder, but I am stuck at the point where the docs say:

        INSTALLATION                                        fuf-installation

        Put all files into your runtime directory. If you have the zip file,
        extract it to your runtime directory.

        You should place the files as follows:
            your_runtime_directory/plugin/fuf.vim
            ...

    What is my "runtime directory"? How do I know if I have one? Why does it matter how I put things into it?
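
    For reference, a hedged answer sketch: Vim's runtime directories are listed in the 'runtimepath' option, and the per-user one (usually ~/.vim on Unix, ~/vimfiles on Windows) may not exist until you create it:

        :set runtimepath?
        " typically begins with ~/.vim (Unix) or ~/vimfiles (Windows);
        " create it and the plugin/ subdirectory yourself if missing:
        :!mkdir -p ~/.vim/plugin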

