Search Results

Search found 59543 results on 2382 pages for 'solution files'.

  • Using SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository, plus an SVN post-commit hook so that whenever a commit is made to the repository the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution. E.g., if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this, please? Thanks!

    Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a hi-spec dedicated server). The development site is approx. 7.5 GB in size and contains approx. 600,000 items, mainly due to having multiple branches/tags.
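    A minimal sketch of such a hook, in Python rather than shell for the common-prefix arithmetic (the svnlook/svn paths and the working-copy location come from the question; everything else is an assumption):

        #!/usr/bin/env python
        # Post-commit hook sketch: svnlook reports the directories the
        # revision touched; update only their lowest common directory.
        import os, subprocess, sys

        REPOS, REV = sys.argv[1], sys.argv[2]
        WORKING_COPY = "/home/www/dev_ssl"  # from the question

        dirs = subprocess.check_output(
            ["/usr/bin/svnlook", "dirs-changed", REPOS, "-r", REV]).decode().split()

        # commonprefix is character-based, so trim back to a whole path component
        common = os.path.commonprefix(dirs)
        if common and not common.endswith("/"):
            common = common[:common.rfind("/") + 1]

        # an empty prefix falls back to a full update of the working copy
        subprocess.check_call(["/usr/bin/svn", "up", os.path.join(WORKING_COPY, common)])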

  • Why is Lua crashing after extracting zip files?

    - by Brian T Hannan
    I have the following code, but it crashes every time it reaches the end of the function, even though it successfully extracts all the files and puts them in the right location.

        require "zip"

        function ExtractZipAndCopyFiles(zipPath, zipFilename, destinationPath)
          local zfile, err = zip.open(zipPath .. zipFilename)
          -- iterate through each file inside the zip file
          for file in zfile:files() do
            local currFile, err = zfile:open(file.filename)
            local currFileContents = currFile:read("*a") -- read entire contents of current file
            -- write current file inside zip to a file outside zip
            local hBinaryOutput = io.open(destinationPath .. file.filename, "wb")
            if (hBinaryOutput) then
              hBinaryOutput:write(currFileContents)
              hBinaryOutput:close()
            end
          end
          zfile:close()
        end

        -- call the function
        ExtractZipAndCopyFiles("C:\\Users\\bhannan\\Desktop\\LUA\\", "example.zip", "C:\\Users\\bhannan\\Desktop\\ZipExtractionOutput\\")

    Why does it crash every time it reaches the end?
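    One thing that stands out: the handles returned by zfile:open() are never closed. Whether that is the actual crash cause is unconfirmed, but a variation of the loop that releases each inner handle before zfile:close() runs would look like this:

        -- same logic, but each member handle is closed before moving on;
        -- leaked open handles at zfile:close() time are one plausible
        -- (unconfirmed) source of the crash
        for file in zfile:files() do
          local currFile = zfile:open(file.filename)
          if currFile then
            local contents = currFile:read("*a")
            currFile:close() -- the original never closed this
            local out = io.open(destinationPath .. file.filename, "wb")
            if out then
              out:write(contents)
              out:close()
            end
          end
        end
        zfile:close()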

  • Editing Multiple files in vi with Wildcards

    - by Alan Storm
    When using the programmer's text editor vi, I'll often use a wildcard to be lazy about the file I want to edit:

        vi ThisIsAReallLongFi*.txt

    When this matches a single file it works great. However, if it matches multiple files vi does something weird. First, it opens the first file for editing. Second, when I :wq out of the file, I get a message at the bottom of the terminal that looks like this:

        E173: 4 more files to edit
        Hit ENTER or type command to continue

    When I hit enter, it returns me to edit mode in the file I was just in. The behavior I'd expect here would be that vi would move on to the next file to edit. So: what's the logic behind vi's behavior here, and is there a way to move on and edit the next file that's been matched? And yes, I know about tab completion; this question is based on curiosity and wanting to understand the shell better.
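    For the record, the matched files land in vi's argument list, and the standard ex commands step through it (names below are as in vim; classic vi has at least :n and :rew):

        :args      " show the argument list, with the current file bracketed
        :n         " edit the next file in the list (:wn to write first)
        :prev      " go back to the previous file
        :rew       " rewind to the first file in the list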

  • How to consolidate multiple LOG files into one .LDF file in SQL2000

    - by John Galt
    Here is what sp_helpfile says about my current database (recovery model is Simple) in SQL2000:

        name                     fileid  filename                                size        maxsize    growth      usage
        MasterScratchPad_Data    1       C:\SQLDATA\MasterScratchPad_Data.MDF    6041600 KB  Unlimited  5120000 KB  data only
        MasterScratchPad_Log     2       C:\SQLDATA\MasterScratchPad_Log.LDF     2111304 KB  Unlimited  10%         log only
        MasterScratchPad_X1_Log  3       E:\SQLDATA\MasterScratchPad_X1_Log.LDF  191944 KB   Unlimited  10%         log only

    I'm trying to prepare this for a detach and then an attach to a SQL2008 instance, but I don't want to have the 2nd .LDF file (I'd like to have just one file for the log). I have backed up the database. I have issued BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY. I have run multiple DBCC SHRINKFILE commands on both of the LOG files. How can I accomplish this goal of having just one .LDF? I cannot find anything on how to delete the file with fileid 3 and/or how to consolidate multiple files into one log file.
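    A hedged sketch of the usual recipe: ALTER DATABASE ... REMOVE FILE is the statement that deletes fileid 3, but SQL Server rejects it while that file still holds active log, so the shrink/remove pair may need repeating after further log backups.

        USE master;
        BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY;
        GO
        USE MasterScratchPad;
        DBCC SHRINKFILE (MasterScratchPad_X1_Log);
        GO
        -- succeeds only once no active log remains in the file
        ALTER DATABASE MasterScratchPad REMOVE FILE MasterScratchPad_X1_Log;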

  • Parsing log files in a folder in ColdFusion

    - by Simon Guo
    The problem: there is a folder, ./log/, containing files like jan2010.xml, feb2010.xml, mar2010.xml, jan2009.xml, feb2009.xml, mar2009.xml, ... Each XML file looks like:

        <root><record name="bob" spend="20"></record>...(more records)</root>

    I want to write a piece of ColdFusion code (log.cfm) that simply parses those XML files. For the front end I would let the user choose a year, then click the submit button. All the content for that year will be shown in a separate table per month. Each table shows the total money spent by each person, like:

        person  cost
        bob     200
        mike    300
        Total   500

    Thanks.
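    A minimal CFML sketch of the per-month pass (assuming the year arrives as form.year; the rendering of each month's table and the Total row are left out):

        <cfdirectory action="list" directory="#expandPath('./log/')#"
                     name="qLogs" filter="*#form.year#.xml">
        <cfloop query="qLogs">
            <cffile action="read" file="#qLogs.directory#/#qLogs.name#" variable="rawXml">
            <cfset doc = XmlParse(rawXml)>
            <cfset totals = structNew()>
            <!--- one <record> element per child of <root> --->
            <cfloop array="#doc.root.XmlChildren#" index="rec">
                <cfset person = rec.XmlAttributes.name>
                <cfif not structKeyExists(totals, person)>
                    <cfset totals[person] = 0>
                </cfif>
                <cfset totals[person] = totals[person] + rec.XmlAttributes.spend>
            </cfloop>
            <!--- render this month's table from #totals# here --->
        </cfloop>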

  • Flash uploader that can handle >2GB files?

    - by Alvin SMith
    Is there an open-source Flash uploader that can handle files larger than 2 GB? ASP.NET implementations like SlickUpload are not an option, and neither is requiring the user to have Java installed to run applets. SWFUpload (and others that I've seen) does not handle files larger than 2 GB. This would be for both IE and Firefox. I've seen a couple of "large file transfer" sites that have a Flash uploader and claim to go past the 2 GB limit (which is the limit for HTTP uploads in most browsers), so I know it is technically possible.

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this question, and although there are many similar questions about reads/writes in C/C++, I haven't found one about this specific task. I want to be able to read, from multiple files (256x256 files), only sizeof(double) bytes located at a certain position in each file. Right now my solution is, for each file:

    Open the file (read, binary mode):

        fstream fTest("current_file", ios_base::out | ios_base::binary);

    Seek to the position I want to read:

        fTest.seekg(position*sizeof(test_value), ios_base::beg);

    Read the bytes:

        fTest.read((char *) &(output[i][j]), sizeof(test_value));

    And close the file:

        fTest.close();

    This takes about 350 ms to run inside a for { for { } } structure with 256x256 iterations (one for each file).

    Q: Do you think there is a better way to implement this operation? How would you do it?
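    For comparison, a hedged sketch of the same per-file operation without the fstream machinery, using a single positioned read via POSIX pread (this assumes a POSIX platform; on Windows the analogue would be ReadFile with an OVERLAPPED offset):

        /* open, one positioned read, close - no stream buffers involved */
        #include <fcntl.h>
        #include <unistd.h>

        double read_double_at(const char *path, long index)
        {
            double value = 0.0;
            int fd = open(path, O_RDONLY);
            if (fd >= 0) {
                /* reads sizeof(double) bytes at byte offset index*sizeof(double) */
                pread(fd, &value, sizeof value, (off_t)index * (off_t)sizeof value);
                close(fd);
            }
            return value;
        }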

  • DailyRollingFileHandler: files should be rotated on a daily basis

    - by nag
    We have a requirement for a handler, extended from Java logging, that rotates the files on a daily basis. Currently java.util.logging supports rotation based on file size, via FileHandler, but it doesn't support rotation on a daily basis. http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6350749 The handler mentioned at http://www.x4juli.org/api/org/x4juli/handlers/RollingFileHandler.html doesn't seem promising either. So what we are looking for is a handler that allows daily rotation. We would like to write such a handler: which is the appropriate class to extend, StreamHandler or FileHandler? A further question: is there a way we can configure two different files for a single handler, say FileHandler? For example, we would like some kinds of messages to be captured in one file and other messages in another file. Would appreciate any comments.
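    A minimal sketch of the first part, extending StreamHandler (FileHandler's rotation and file-naming logic is private, which is why StreamHandler is the easier base; the pattern argument and class name here are assumptions):

        // Rolls to a new file whenever the date changes between publishes.
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.time.LocalDate;
        import java.util.logging.LogRecord;
        import java.util.logging.SimpleFormatter;
        import java.util.logging.StreamHandler;

        public class DailyRollingFileHandler extends StreamHandler {
            private final String pattern; // e.g. "app-%s.log"
            private LocalDate current;

            public DailyRollingFileHandler(String pattern) throws IOException {
                this.pattern = pattern;
                setFormatter(new SimpleFormatter());
                roll(LocalDate.now());
            }

            private void roll(LocalDate day) throws IOException {
                current = day;
                // setOutputStream flushes and closes any previous stream
                setOutputStream(new FileOutputStream(String.format(pattern, day), true));
            }

            @Override
            public synchronized void publish(LogRecord record) {
                LocalDate today = LocalDate.now();
                if (!today.equals(current)) {
                    try { roll(today); } catch (IOException e) { reportError(null, e, 0); }
                }
                super.publish(record);
                flush();
            }
        }

    As for the second question, java.util.logging ties one handler to one stream, so the usual route is two handlers, each with its own Filter deciding which messages it accepts.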

  • How to make Solution Explorer behave after clearing search?

    - by stijn
    I currently have a VS installation with no extensions, to see how that works out. For navigation that means making heavy use of Ctrl+; aka Search Solution Explorer. While the search itself is OK, it has one major drawback that makes it a pain for me to use (both with keyboard and mouse). Take a solution with two projects, one collapsed, one opened: use Ctrl+; and start typing until a match is found in the collapsed project. What I want now is to simply clear the search and return to the previous view. Seems like a pretty standard requirement, no? But there seems to be no such functionality built in.

    The problem with the current commands that come close (pressing Esc, clicking the Back or Home buttons in the Solution Explorer toolbar) is all the same: they have the extremely annoying behaviour of insisting on suddenly uncollapsing the previously collapsed project and tracking the match found (and this is with the Track Active Item in Solution Explorer option turned off in the options). This makes no sense from a UX point of view: you select some kind of 'undo' command, the search box clears, which is expected, but then suddenly there's an item visible from a previous search. So if the collapsed project has, say, 50 items in it, Solution Explorer is now useless visually, since it litters the screen with stuff you don't want to see, and, worse, you have to manually collapse the project again to return to the previous view.

    Is there a way around this? I thought maybe the keyboard shortcuts for Back/Home would be different, but the commands do not seem to be registered. I looked into EnvDTE80.DTE2.ToolWindows.SolutionExplorer, but it has no properties/methods that have anything to do with this issue. And somewhere in the tree there is a Microsoft.VisualStudio.PlatformUI.SolutionPivotNavigator, which is probably the class responsible for this behaviour, but I have no idea how to access it.

  • How can you configure or extend BITS (Background Intelligent Transfer Service) to read files from a

    - by Mark
    I have an ASP.NET load-balanced application (web service and website). It runs on SQL Server. I need to be able to provide large files for download. However, because of the load-balancing situation, the files are stored in the SQL database as opposed to the file system. BITS seems to be the best approach, and I have full control of the client. However, I don't know how to configure BITS to read the file from the database. I know how to write the C# code for that, but I don't know how to get BITS to hook into it as opposed to reading the file from the file system. Any ideas?
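    One angle worth noting: a BITS download job just points at a URL and requires the server to answer HEAD requests and honor HTTP Range requests, so any ASP.NET endpoint that streams the BLOB and supports ranges should satisfy it. A hedged sketch of such a handler (GetBlobLength and ReadBlobChunk are hypothetical data-access helpers, and error handling is omitted):

        // Serves byte ranges from a SQL-stored file so BITS can resume chunks.
        using System;
        using System.Web;

        public class BitsDownloadHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext ctx)
            {
                string id = ctx.Request.QueryString["id"];
                long length = GetBlobLength(id);
                ctx.Response.AddHeader("Accept-Ranges", "bytes");

                long from = 0, to = length - 1;
                string range = ctx.Request.Headers["Range"]; // e.g. "bytes=0-4194303"
                if (range != null && range.StartsWith("bytes="))
                {
                    string[] parts = range.Substring(6).Split('-');
                    from = long.Parse(parts[0]);
                    if (parts[1].Length > 0) to = long.Parse(parts[1]);
                    ctx.Response.StatusCode = 206; // Partial Content
                    ctx.Response.AddHeader("Content-Range",
                        string.Format("bytes {0}-{1}/{2}", from, to, length));
                }
                ctx.Response.AddHeader("Content-Length", (to - from + 1).ToString());
                ctx.Response.BinaryWrite(ReadBlobChunk(id, from, to));
            }

            public bool IsReusable { get { return true; } }

            // hypothetical helpers that read the BLOB from SQL Server
            private long GetBlobLength(string id) { throw new NotImplementedException(); }
            private byte[] ReadBlobChunk(string id, long from, long to) { throw new NotImplementedException(); }
        }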

  • golang dynamically parsing files

    - by Brian Voelker
    For parsing files I have set up a variable for template.ParseFiles, and I currently have to set each file manually. Two things:

    1. How would I be able to walk through a main folder and a multitude of subfolders and automatically add them to ParseFiles, so I don't have to add each file individually?
    2. How would I be able to call a file with the same name in a subfolder? Currently I get an error at runtime if I add a file with the same name in ParseFiles.

        var templates = template.Must(template.ParseFiles(
            "index.html",           // main file
            "subfolder/index.html", // subfolder with same filename errors at runtime
            "includes/header.html",
            "includes/footer.html",
        ))

        func main() {
            // Walk and ParseFiles
            filepath.Walk("files", func(path string, info os.FileInfo, err error) error {
                if !info.IsDir() {
                    // Add path to ParseFiles
                }
                return nil
            })
            http.HandleFunc("/", home)
            http.ListenAndServe(":8080", nil)
        }

        func home(w http.ResponseWriter, r *http.Request) {
            render(w, "index.html")
        }

        func render(w http.ResponseWriter, tmpl string) {
            err := templates.ExecuteTemplate(w, tmpl, nil)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
            }
        }
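    Both points follow from ParseFiles keying each template by its base filename, which is why two index.html files collide. A hedged sketch that walks the tree and registers each file under its slash-separated relative path instead:

        // loadTemplates registers "index.html" and "subfolder/index.html"
        // as distinct templates keyed by relative path.
        package main

        import (
            "html/template"
            "io/ioutil"
            "os"
            "path/filepath"
        )

        var templates = template.New("root")

        func loadTemplates(dir string) error {
            return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
                if err != nil || info.IsDir() {
                    return err
                }
                b, err := ioutil.ReadFile(path)
                if err != nil {
                    return err
                }
                name, err := filepath.Rel(dir, path) // e.g. "subfolder/index.html"
                if err != nil {
                    return err
                }
                _, err = templates.New(filepath.ToSlash(name)).Parse(string(b))
                return err
            })
        }

        func main() {
            if err := loadTemplates("files"); err != nil {
                panic(err)
            }
            // ... register handlers and ListenAndServe as before ...
        }

    render can then execute templates by those names, e.g. templates.ExecuteTemplate(w, "subfolder/index.html", nil).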

  • How to merge elements from two XML files?

    - by Googler
    Hi all, I need C# code to merge two XML files into one, with the content specified below.

    XML file 1:

        <exchange-documents>
          <documentlegal>
            <bibliographic-data>
              <applicants>
                <applicant-name>
                  <name>CENTURY PRODUCTS CO [US]</name>
                </applicant-name>
              </applicants>
            </bibliographic-data>
          </documentlegal>
        </exchange-documents>

    XML file 2:

        <exchange-documents>
          <documentpatent>
            <bibliographic-data>
              <applicants>
                <applicant-name>
                  <name>CENTURY PRODUCTS CO [US]</name>
                </applicant-name>
              </applicants>
            </bibliographic-data>
          </documentpatent>
        </exchange-documents>

    I need to read the above two XML files and write them into a new XML file with the selected elements.

    Output XML:

        <documentlegal>
          <bibliographic-data>
            <applicants>
              <applicant-name>
                <name>CENTURY PRODUCTS CO [US]</name>
              </applicant-name>
            </applicants>
          </bibliographic-data>
        </documentlegal>
        <documentpatent>
          <bibliographic-data>
            <applicants>
              <applicant-name>
                <name>CENTURY PRODUCTS CO [US]</name>
              </applicant-name>
            </applicants>
          </bibliographic-data>
        </documentpatent>

    I don't need the exchange-documents element. Can anyone provide me C# code to achieve the above scenario?
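    A hedged LINQ to XML sketch (note that the output as shown has two top-level elements, which is not a well-formed XML document, so this version puts them under a single new root; the file names and root name are assumptions):

        using System.Xml.Linq;

        class MergeXml
        {
            static void Main()
            {
                // copy each file's children, dropping <exchange-documents> itself
                var merged = new XElement("documents");
                foreach (var file in new[] { "file1.xml", "file2.xml" })
                    merged.Add(XDocument.Load(file).Root.Elements());
                merged.Save("output.xml");
            }
        }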

  • Document Management System - Where to Store Files?

    - by Diego AC
    Hey, Stack! I'm in charge of building an ASP.NET MVC document management system. It has to be able to do basic document management tasks like adding, editing and searching entries, and also perform versioning. Anyway, I'm targeting PDF, Office and many image formats as the file attached to each document entry in the database. My question is: what design guidelines do pros follow when building the storage mechanism? Do they store the document files in the file system? The database? How is file uploading handled? I used to upload the files to a temporary location while the user was editing the data and move them to permanent storage when the user confirmed the entry creation. Is this good? Any suggestions for improvement?

  • System.Web.HttpException in ASP.NET MVC 2 on images and JavaScript files

    - by Rippo
    Hi, I am getting the following errors, reported by ELMAH on my ASP.NET MVC 2 site, for JavaScript files, images, etc.:

        System.Web.HttpException: The remote host closed the connection

    I have done some research, and it appears that the user/bot is clicking a link on the site before the page has fully loaded. Now, this error never occurs on a controller action, but always on a file that is on disk, e.g.:

        /Content/CmsImages/logo.png
        /Content/CmsImages/MemberImages/Photo-001605.jpg
        /Content/jquery.tickertype.js

    So this means that all static files are being routed through the MVC pipeline. What options do I have?
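    If those requests really are entering routing, one option is to tell routing to leave them alone with IgnoreRoute in Global.asax.cs, sketched below (the Content path matches the examples above). Another option, since "the remote host closed the connection" is usually just an aborted client, is to filter that exception out in ELMAH's errorFilter configuration rather than suppress it at the routing level.

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); // the MVC default
            routes.IgnoreRoute("Content/{*pathInfo}");        // skip /Content/* files
            // ... the usual MapRoute calls follow ...
        }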

  • Configuring Hadoop logging to avoid too many log files

    - by Eric Wendelin
    I'm having a problem with Hadoop producing too many log files in $HADOOP_LOG_DIR/userlogs (the Ext3 filesystem allows only 32000 subdirectories), which looks like the same problem as in this question: http://stackoverflow.com/questions/2091287/error-in-hadoop-mapreduce My question is: does anyone know how to configure Hadoop to roll the log dir or otherwise prevent this? I'm trying to avoid just setting the "mapred.userlog.retain.hours" and/or "mapred.userlog.limit.kb" properties, because I want to actually keep the log files. I was also hoping to configure this in log4j.properties, but, looking at the Hadoop 0.20.2 source, it writes directly to log files instead of actually using log4j. Perhaps I don't fully understand how it's using log4j. Any suggestions or clarifications would be greatly appreciated.

  • How to know what files or folders have changed before committing

    - by Pedro
    My problem is how to know what files or folders have changed before doing a commit. I can add all the new files in my working copy before committing, and the repository changes, but if, for example, I delete one file from the working copy, I don't know the way to pick up this change before committing. When you use Tortoise, for example, before you commit the program shows all the changes in the working copy, and you can choose which changes to commit and which not to. Is there some way to do this using SharpSvn? Thanks for your answers!
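    A hedged sketch of the SharpSvn side: SvnClient.GetStatus reports every local change, including files deleted on disk, which you can then present for selection (the working-copy path is an assumption):

        using System;
        using System.Collections.ObjectModel;
        using SharpSvn;

        class WorkingCopyChanges
        {
            static void Main()
            {
                using (var client = new SvnClient())
                {
                    Collection<SvnStatusEventArgs> changes;
                    client.GetStatus(@"C:\path\to\working-copy", out changes);
                    foreach (var change in changes)
                        // LocalContentStatus: Modified, Added, Missing
                        // (deleted on disk), NotVersioned, ...
                        Console.WriteLine("{0}: {1}",
                            change.LocalContentStatus, change.Path);
                }
            }
        }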

  • How to fetch a range of files from an FTP server using C#

    - by user260076
    Hello all, I'm stuck at a point where I am using a wildcard parameter with the FtpWebRequest object, like so:

        FtpWebRequest reqFTP = (FtpWebRequest)FtpWebRequest.Create(new Uri("ftp://" + ftpServerIP + "/" + WildCard));

    Now this works fine; however, I now want to fetch a specific range of files. Say the file naming structure is *YYYYMMDD.* and I need to fetch all the files prior to today's date. I've been searching for a wildcard pattern for that with no good results, one that will work in a simple file listing, and it doesn't look like I can use regex here. Any thoughts?
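    For what it's worth, plain FTP wildcards cannot express a date range, so the usual workaround is to list everything and filter client-side on the YYYYMMDD part of each name. A hedged sketch (it assumes the eight digits sit at the end of the file stem, as in report20100315.log):

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.IO;
        using System.Net;

        static class FtpRangeFetch
        {
            // Lists the server root and yields names whose date is before cutoff.
            public static IEnumerable<string> FilesBefore(string ftpServerIP, DateTime cutoff)
            {
                var req = (FtpWebRequest)WebRequest.Create("ftp://" + ftpServerIP + "/");
                req.Method = WebRequestMethods.Ftp.ListDirectory;
                using (var reader = new StreamReader(req.GetResponse().GetResponseStream()))
                {
                    string name;
                    while ((name = reader.ReadLine()) != null)
                    {
                        string stem = Path.GetFileNameWithoutExtension(name);
                        DateTime stamp;
                        if (stem.Length >= 8 &&
                            DateTime.TryParseExact(stem.Substring(stem.Length - 8),
                                "yyyyMMdd", CultureInfo.InvariantCulture,
                                DateTimeStyles.None, out stamp) &&
                            stamp < cutoff)
                            yield return name; // matched and older than the cutoff
                    }
                }
            }
        }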
