Search Results

Search found 39051 results on 1563 pages for 'temporary files'.


  • TSQL - How to join 1..* from multiple tables in one resultset?

    - by ElHaix
    A location table record has two address IDs - a mailing AddressID and a billing AddressID - that refer to an address table, so the address table can hold up to two records for a given location. Given a location ID, I need a sproc to return all tbl_Location fields and all tbl_Address fields in one resultset:

        LocationID INT, ClientID INT, LocationName NVARCHAR(50), LocationDescription NVARCHAR(50),
        MailingAddressID INT, BillingAddressID INT,
        MAddress1 NVARCHAR(255), MAddress2 NVARCHAR(255), MCity NVARCHAR(50), MState NVARCHAR(50), MZip NVARCHAR(10), MCountry CHAR(3),
        BAddress1 NVARCHAR(255), BAddress2 NVARCHAR(255), BCity NVARCHAR(50), BState NVARCHAR(50), BZip NVARCHAR(10), BCountry CHAR(3)

    I've started by creating a temp table with the required fields, but am a bit stuck on how to accomplish this. I could do sub-selects for each of the required address fields, but that seems a bit messy. I've already got a table-valued function that accepts an address ID and returns all fields for that ID, but I'm not sure how to integrate it into the required result. Offhand, it looks like three selects are needed to populate this table - 1: location, 2: mailing address, 3: billing address. What I'd like to do is just create a view and use that. Any assistance would be helpful. Thanks.
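
    A minimal sketch of the view approach: join tbl_Address twice under different aliases, one join per address ID. The unprefixed address column names (Address1, City, ...) are assumptions, since the question only gives the prefixed output names:

        CREATE VIEW vw_LocationWithAddresses AS
        SELECT  l.LocationID, l.ClientID, l.LocationName, l.LocationDescription,
                l.MailingAddressID, l.BillingAddressID,
                m.Address1 AS MAddress1, m.Address2 AS MAddress2, m.City AS MCity,
                m.[State] AS MState, m.Zip AS MZip, m.Country AS MCountry,
                b.Address1 AS BAddress1, b.Address2 AS BAddress2, b.City AS BCity,
                b.[State] AS BState, b.Zip AS BZip, b.Country AS BCountry
        FROM    tbl_Location l
        LEFT JOIN tbl_Address m ON m.AddressID = l.MailingAddressID   -- mailing
        LEFT JOIN tbl_Address b ON b.AddressID = l.BillingAddressID;  -- billing

    The LEFT JOINs keep locations whose address IDs are null; a single SELECT ... FROM vw_LocationWithAddresses WHERE LocationID = @LocationID then replaces the three-select plan.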

    Read the article

  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache log behavior because it was leaving me with very BIG files... So I now use cronolog to split my logs into log/httpd/2012/11/access_2012.11.30.log for example, pattern: %Y/%m/access_%Y.%m.%d.log. I now want to split my old 42GB file into the same structure but really don't know how to do that efficiently. I tried some simple commands with cat, egrep, awk... but really don't know how to handle all that in a more powerful script. Here is what the log looks like:

        x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET...
        x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET...
        [...]
        x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET...

    So for each line I need to extract the year %Y (ex. 2012), the month %m (ex. 11) and the day %d, and push the entire line out to %Y/%m/access_%Y.%m.%d.log. Can someone give me clues to get that working? Thanks a lot for your interest.
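
    A minimal awk sketch, assuming every line keeps the [dd/Mon/yyyy:... timestamp in field 4 as shown above, and that the year/month directories are created first:

        # create the target directories up front, e.g.:
        #   mkdir -p 2011/{01..12} 2012/{01..12}
        awk '{
            split($4, a, "[[/:]")      # a[2]=day, a[3]=month name, a[4]=year
            m = sprintf("%02d", (index("JanFebMarAprMayJunJulAugSepOctNovDec", a[3]) + 2) / 3)
            out = a[4] "/" m "/access_" a[4] "." m "." a[2] ".log"
            print >> out
            close(out)                 # stay under the open-file limit
        }' old_access.log

    Closing the output file on every line is slower but safe when the log spans many days; dropping the close(out) speeds it up if the awk implementation's file-descriptor limit allows it.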

    Read the article

  • C# Binary File Compare

    - by Simon Farrow
    I'm in a situation where I want to compare two binary files. One of them is already stored on the server with a pre-calculated Crc32 in the database from when I stored it originally. I know that if the Crc is different then the files are definitely different. However, if the Crc is the same I don't know that the files are the same. So what I'm looking for is a nice efficient way of comparing the two streams: one from the posted file and one from the file system. I'm not an expert on streams but I'm well aware that I could easily shoot myself in the foot here as far as memory usage is concerned. Any help is greatly appreciated.
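
    A minimal sketch of a chunked comparison that keeps memory use constant regardless of file size. It assumes FileStream-like streams, where Read only returns a short count at end of stream; the method name is made up:

        private static bool StreamsAreEqual(Stream a, Stream b)
        {
            const int BufferSize = 64 * 1024;
            var bufA = new byte[BufferSize];
            var bufB = new byte[BufferSize];
            while (true)
            {
                int readA = a.Read(bufA, 0, BufferSize);
                int readB = b.Read(bufB, 0, BufferSize);
                if (readA != readB) return false;   // one stream ended first
                if (readA == 0) return true;        // both ended: contents match
                for (int i = 0; i < readA; i++)
                    if (bufA[i] != bufB[i]) return false;
            }
        }

    If both streams are seekable, comparing their Length properties first avoids reading anything at all when the sizes differ.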

    Read the article

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

        - On-site file server (Mac OS X Server) used by GFX designers, with a working set of 1TB of data.
        - Offsite server with 2TB of available storage (CentOS 6).
        - The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
        - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks).
        - rsync pushes about 60GB of file changes a day.

    The problem: the onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. A full backup is incredibly slow, and it's a pain in the backside getting people to remember to change the tape - it often gets forgotten. The quick solution would be to buy an LTO-5 drive and tapes, but that has been turned down because of the cost... What I would like is something that:

        - Works the way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
        - Compresses the data it sends and sends it over SSH.
        - Keeps a 14-day retention but doesn't keep duplicate data. As an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server.
        - Works with the Mac OS X server and the CentOS 6 server.
        - Doesn't cost an arm and a leg - it must be cheaper than buying an LTO-5 drive with tapes (around £1500).
        - Can be set up to run autonomously.
        - Has some sort of control panel that allows an admin to easily restore a file/folder.

    Any recommendations? (One candidate is sketched below.)
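
    One candidate that ticks most of these boxes (minus the control panel) is rdiff-backup: it stores a current mirror plus compressed reverse-increments, so unchanged data is never duplicated, and it transfers only deltas over SSH. A minimal sketch, assuming rdiff-backup is installed on both machines, the 1.x command syntax, and made-up host and path names:

        # on the CentOS box: pull from the Mac over SSH; only deltas cross the wire
        rdiff-backup user@macserver.example.com::/Volumes/WorkData /backup/workdata

        # expire increments older than 14 days
        rdiff-backup --remove-older-than 14D /backup/workdata

    Restores are per file/folder as of any retained date (rdiff-backup -r 3D ...), though from the command line rather than a control panel; SSH-level compression can be enabled in ~/.ssh/config if needed.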

    Read the article

  • combine two GCC compiled .o object files into a third .o file

    - by ~lucian.grijincu
    How does one combine two GCC-compiled .o object files into a third .o file?

        $ gcc -c a.c -o a.o
        $ gcc -c b.c -o b.o
        $ ??? a.o b.o -o c.o
        $ gcc c.o other.o -o executable

    If you have access to the source files, the -combine GCC flag will merge the source files before compilation:

        $ gcc -c -combine a.c b.c -o c.o

    However this only works for source files, and GCC does not accept .o files as input for this command. Normally, linking .o files does not work properly, as you cannot use the output of the linker as input for it. The result is a shared library and is not linked statically into the resulting executable.

        $ gcc -shared a.o b.o -o c.o
        $ gcc c.o other.o -o executable
        $ ./executable
        ./executable: error while loading shared libraries: c.o: cannot open shared object file: No such file or directory
        $ file c.o
        c.o: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped
        $ file a.o
        a.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
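
    For reference, the operation that does produce a third relocatable object is a partial link with the GNU linker's -r flag - a sketch:

        $ ld -r a.o b.o -o c.o      # -r: "relocatable" output, a.k.a. a partial link
        $ file c.o
        c.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
        $ gcc c.o other.o -o executable

    Unlike -shared, the -r output stays a plain .o that can be fed back into a normal static link.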

    Read the article

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues, but every once in a while it attempts to process a file that has not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check whether a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
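
    SCP itself takes no lock the Windows side can query; the robust convention is to have the sender upload to a temporary name and rename on completion, since the rename is effectively atomic. Failing that, the receiver can probe for exclusive access - whatever is still writing the file will hold a handle that blocks the probe. A minimal C# sketch of that probe (class and method names are made up):

        using System.IO;

        static class TransferCheck
        {
            // Returns true once no other process has the file open,
            // i.e. the SCP daemon has finished writing and closed it.
            public static bool IsComplete(string path)
            {
                try
                {
                    using (new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None))
                        return true;    // exclusive open succeeded
                }
                catch (IOException)
                {
                    return false;       // sharing violation: still being written
                }
            }
        }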

    Read the article

  • Using Visual Studio .ncb file for reflection.

    - by Rushi
    I am developing a visual game level editor in C++. For this I want a reflection (RTTI) mechanism to know class attributes at runtime. I am currently using PDB files for this, but with PDB files I couldn't retrieve the comments attached to an attribute in the source code, which carry extra information about it. Visual Studio uses NCB files for IntelliSense. So would it be a better idea to use NCB instead of PDB? If yes, how do I retrieve information from NCB files? Is there an SDK like the DIA SDK?

    Read the article

  • C1083 : Permission denied on .sbr files

    - by speps
    Hello, I am using Visual Studio 2005 (with SP1) and I am getting weird errors concerning .sbr files. These files, as I read on MSDN, are intermediate files used by BSCMAKE to generate a .bsc file. The errors I get are, for example (on different builds):

        11string.cpp : fatal error C1083: Cannot open compiler generated file: '.\debug\String.sbr' : Permission denied
        58type.cpp : fatal error C1083: Cannot open compiler generated file: '.\Debug/Type.sbr' : Permission denied

    (The original messages are in French: "Impossible d'ouvrir le fichier généré(e) par le compilateur".) It seems to be consistent (I have at least 5 or 6 examples like this) with a .cpp file being compiled twice in the same project, respectively:

        11String.cpp
        *some warnings, 2 lines*
        11String.cpp

        58Type.cpp
        *some warnings and other files compiled, a lot of lines*
        58Type.cpp

    I already checked the .vcproj files for duplicate entries and that does not seem to be the problem. I would appreciate any help regarding this issue. Deactivating the build of .bsc files seems to be a workaround, but maybe someone has better information than this. Thanks.

    Read the article

  • Is my webserver being abused for banking fraud?

    - by koffie
    For a few weeks I've been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank-fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line):

        www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"

    The log format I use is:

        LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

    The strange thing is that all these domains are domains of banks, and 3 out of the 4 are also on the list of the bank-fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know whether my server is being abused for bank fraud or not. I suspect not, because it's giving 403 to all requests, but any extra checks I can do to ensure my server is not being abused are welcome. I'm also curious how the "bad guys" expected my server to behave - i.e. are they just expecting my server to act as a proxy to hide the IP of the fake site, or are they expecting my server to actually serve the fake banking website? Is the IP 1.2.3.4 more likely to be the IP of a victim or of a bad guy? I suspect a bad guy, because it's quite unlikely that a real person would visit 4 bank sites in one second. If it's a bad guy, I'm very curious what he is trying to do.
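
    One such check is to confirm that the default (first-listed) virtual host rejects every request whose Host header isn't a site the server actually serves - which, judging by the blanket 403s, may already be the configuration here. A minimal Apache 2.4 sketch (2.2 would use Order deny,allow / Deny from all instead):

        <VirtualHost *:80>
            # first vhost = default: catches requests for hosts we don't serve,
            # such as the bank domains in the log above
            ServerName catchall.invalid
            <Location "/">
                Require all denied
            </Location>
        </VirtualHost>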

    Read the article

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that spoils the officialness of these results a little. This year we went with a little (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory. The server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: Now, because these files are like 14MB in total and a true lightweight Linux distribution isn't that big either, I'm thinking: what if next year we run a full RAMdrive? Is there such a thing? Is that useful? Is that worth it for a team that uses Debian almost exclusively? Are there other optimizations that I should be focusing on instead?
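
    No special distribution is needed: any Linux (Debian included) can serve the document root from RAM via tmpfs, and in practice the kernel page cache will keep a hot 14MB working set in RAM anyway. A minimal sketch with made-up paths:

        # a 64 MB RAM-backed filesystem is plenty for ~14 MB of static results
        mount -t tmpfs -o size=64m tmpfs /var/www/results

        # copy the files in at publication time; tmpfs contents live only in
        # RAM, so repopulate after any reboot
        cp -a /srv/results/. /var/www/results/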

    Read the article

  • What libraries are available for manipulating super large images in .Net

    - by tpower
    I have some really large files, for example a 320 MB tif file with 14000 x 9000 pixels. The operations I need to perform are basically scaling the images to get smaller versions and breaking the image into tiles. My code works fine with small files and I use the .Net Bitmap objects, but I will occasionally get Out of Memory exceptions for larger files. I've tried using the FreeImage library's FreeImageBitmap but have the same problems. I'm using something like the following to scale the image:

        using (Graphics g = Graphics.FromImage((Image)result))
        {
            g.DrawImage(
                source,
                xOffset,
                yOffset,
                source.Width * scale,
                source.Height * scale
            );
        }

    Ideally I'd like a third-party library to do all the hard work, but if you have any tips or resources with more information I would appreciate it.
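
    For the tiling half, GDI+ can at least avoid allocating more than one tile at a time by blitting a source rectangle straight into a tile-sized bitmap. A sketch, assuming the full-size source bitmap itself still fits in memory (names are made up):

        using System.Drawing;
        using System.Drawing.Imaging;

        static Bitmap ExtractTile(Bitmap source, Rectangle srcRect)
        {
            // allocate only the tile, never another full-size copy
            var tile = new Bitmap(srcRect.Width, srcRect.Height,
                                  PixelFormat.Format24bppRgb);
            using (var g = Graphics.FromImage(tile))
            {
                g.DrawImage(source,
                            new Rectangle(0, 0, tile.Width, tile.Height),
                            srcRect, GraphicsUnit.Pixel);
            }
            return tile;
        }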

    Read the article

  • Why SQL2008 debugger would NOT step into a certain child stored procedure

    - by John Galt
    I'm encountering differences in T-SQL with SQL2008 (vs. SQL2000) that are leading me to dead ends. I've verified that the technique of sharing #TEMP tables between a caller which CREATEs the #TEMP and the child sProc which references it remains valid in SQL2008 (see recent SO question). My core problem remains a critical "child" stored procedure that works fine in SQL2000 but fails in SQL2008 (i.e. a FROM clause in the child sProc is coded as: SELECT * FROM #AREAS A) despite #AREAS being created by the calling parent. Rather than post snippets of the code now, here is another symptom that may help you suggest something. I fired up the new debugger in SQL Mgmt Studio:

        EXEC dbo.AMS1 @S1='06',@C1='037',@StartDate='01/01/2008',@EndDate='07/31/2008',@Type=1,@ACReq = 1,@Output = 0,@NumofLines = 30,@SourceTable = 'P',@LoanPurposeCatg='P'

    This is a very large sProc and the key snippet that is weird is the following:

        create table #Areas (
            State char(2),
            County char(3),
            ZipCode char(5) NULL,
            CityName varchar(28) NULL,
            PData varchar(3) NULL,
            RData varchar(3) NULL,
            SMSA_CD varchar(10) NULL,
            TypeCounty varchar(50),
            StateAbbr char(2)
        )

        EXECUTE dbo.AMS_I_GetAreasV5    -- this child populates #Areas
              @SMSA = @SMSA
            , @S1 = @S1
            , @C1 = @C1
            , @Z1 = @Z1
            , @SourceTable = @SourceTable
            , @CustomID = @CustomID
            , @UserName = @UserName
            , @CityName = @CityName
            , @Debug=0

        EXECUTE dbo.AMS_I_GetAreas_FixAC    -- this child cannot reference #Areas
              @StartDate = @StartDate
            , @EndDate = @EndDate
            , @SMSA_CD = @SMSA_CD
            , @S1 = @S1
            , @C1 = @C1
            , @Z1 = @Z1
            , @CityName = @CityName
            , @CustomID = @CustomID
            , @Debug=0

        -- continuation of the parent sProc

    I can step through the execution of the parent stored procedure. When I get to the first child sProc above, I can either STEP INTO dbo.AMS_I_GetAreasV5 or STEP OVER its execution. When I arrive at the invocation of the 2nd child sProc - dbo.AMS_I_GetAreas_FixAC - I try to STEP INTO it (because that is where the problem statement is) and STEP INTO is ignored (i.e. treated like STEP OVER instead; yet I KNOW I pressed F11, not F10). It WAS executed, however, because when control returns to the statement after the EXECUTE and I click Continue to finish execution, the results window shows the errors in the dbo.AMS_I_GetAreas_FixAC (i.e. the 2nd child) stored procedure. Is there a way to "pre-load" an sProc with the goal of setting a breakpoint on its entry, so that I can pursue execution inside it? In summary, I wonder if the inability to step into a given child sProc might be related to the same inability of this particular child to reference a #temp created by its parent (caller).

    Read the article

  • Converting Multiple files to zip and saving them in ownCloud

    - by user1055380
    I wanted to convert an array with some css, js and html files into a zip file and save it in ownCloud (it has its own framework, but knowledge of it is not required). What I am saving instead is an infinite loop of zip files, as in, a zip inside a zip, so I can't even check whether the code is working correctly or not. Please help. Here is the code:

        <?php
        /* creates a compressed zip file */
        $filename = $_GET["filename"];

        function create_zip($files = array(), $destination = '', $overwrite = false)
        {
            // if the zip file already exists and overwrite is false, return false
            if (file_exists($destination) && !$overwrite) { return false; }
            // vars
            $valid_files = array();
            // if files were passed in...
            if (is_array($files)) {
                // cycle through each file
                foreach ($files as $file => $local) {
                    // make sure the file exists
                    if (file_exists($file)) {
                        $valid_files[$file] = $local;
                    }
                }
            }
            // if we have good files...
            if (count($valid_files)) {
                // create the archive
                $zip = new ZipArchive();
                if ($zip->open($destination, $overwrite ? ZIPARCHIVE::OVERWRITE : ZIPARCHIVE::CREATE) !== true) {
                    return false;
                }
                // add the files
                foreach ($valid_files as $file => $local) {
                    $zip->addFile($file, $local);
                }
                // debug
                // echo 'The zip archive contains ',$zip->numFiles,' files with a status of ',$zip->status;
                // close the zip -- done!
                $zip->close();
                // check to make sure the file exists
                return file_exists($destination);
            } else {
                return false;
            }
        }

        $files_to_zip = array(
            'apps/impressionist/css/mappingstyle.css' => '/css/mappingstyle.css',
            'apps/impressionist/css/style.css'        => '/css/style.css',
            'apps/impressionist/js/jquery.js'         => '/scripts/jquery.js',
            'apps/impressionist/js/impress.js'        => '/scripts/impress.js',
            realpath('apps/impressionist/output/'.$filename.'.html') => $filename.'.html'
        );

        // if true, good; if false, zip creation failed
        $result = create_zip($files_to_zip, $filename.'.zip');

        $save_file = OC_App::getStorage('impressionist');
        $save_file->file_put_contents($filename.'.zip', $files_to_zip);
        ?>
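
    The last line looks like the culprit: it stores the $files_to_zip array itself rather than the zip that create_zip() just wrote to disk. A hedged sketch of the likely fix, assuming the ownCloud storage object's file_put_contents(path, data) takes the data as a string:

        // store the bytes of the zip that create_zip() wrote, not the array
        $save_file = OC_App::getStorage('impressionist');
        $save_file->file_put_contents($filename.'.zip', file_get_contents($filename.'.zip'));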

    Read the article

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left-joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it:

        WITH DataCTE as (
            SELECT [columns] FROM table
            INNER JOIN table2 ON [...]
            INNER JOIN table3 ON [...]
            LEFT JOIN table3 ON [...]
        )
        SELECT [aggregates_columns of each subset]
        FROM DataCTE Main
        LEFT JOIN DataCTE BananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality = 100
        LEFT JOIN DataCTE DamagedBananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality < 20
        LEFT JOIN DataCTE MangosSubset ON [...]
        GROUP BY [

    I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess to not being an expert at reading those. I would have assumed SQL Server to be smart enough to perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE and made it output to a temp table instead. The version referring to the CTE takes 40 seconds. The version referring to the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution. Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? Thanks, Kharlos
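
    For the record, SQL Server treats a non-recursive CTE as an inline view and expands it at every reference, so four self-joins can mean four full evaluations. The multi-statement TVF mentioned above materializes the subset once in a table variable instead - a minimal sketch with made-up table and column names:

        CREATE FUNCTION dbo.fn_ProductTotals (@StartDate datetime, @EndDate datetime)
        RETURNS @Result TABLE (Product varchar(50), TotalQty int, DamagedQty int)
        AS
        BEGIN
            DECLARE @Data TABLE (Product varchar(50), Quality int, Qty int);

            -- the expensive subset is retrieved exactly once
            INSERT @Data (Product, Quality, Qty)
            SELECT Product, Quality, Qty
            FROM dbo.SalesDetail
            WHERE SaleDate BETWEEN @StartDate AND @EndDate;

            -- subtotals by different criteria now reuse the materialized rows
            INSERT @Result (Product, TotalQty, DamagedQty)
            SELECT d.Product,
                   SUM(d.Qty),
                   SUM(CASE WHEN d.Quality < 20 THEN d.Qty ELSE 0 END)
            FROM @Data d
            GROUP BY d.Product;

            RETURN;
        END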

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: now there are two possible URLs to the same resource: http://www.example.com/foo/bar/ (slash URL) and http://www.example.com/foo/bar/index.html (index.html URL). By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL, and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
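
    A sketch of one way around that internal rewrite: $request_uri holds the URI exactly as the client sent it, before any internal index processing, so testing it distinguishes the two forms (untested):

        # inside the server {} block, before the locations
        if ($request_uri ~ "/index\.html$") {
            return 404;
        }

    Because only explicit .../index.html requests set $request_uri that way, slash URLs fall through to the normal index handling.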

    Read the article

  • Blackberry Widget Packager saves my files in strange places

    - by chibineku
    I have just installed the Blackberry Widget Packager and Blackberry Web Plug-In for Eclipse, and everything works fine, but my files are output to strange places. For example, I tried putting my zipped source files in the folder Blackberry Widget Packager/web and I got an error during packaging. Packaging works when the .zip is in the same directory as bbwp, though. When a widget is successfully created, the executable .jar file and the .rapc files are put in some stupid folder like users/user/temp/widgetname094098456, and the other files are split between two folders in Blackberry Widget Packager/bin. This is slightly annoying, as I don't want to be spending time herding my files. Anyone have any thoughts on why my files are being scattered like this?

    Read the article

  • PHP FTP Upload thousands of files

    - by user275074
    Hi, I've written a small FTP class which I use to move files from a local server to a remote server. It does this by checking an array of local files against an array of files on the remote server. If the file exists on the remote server, it won't bother uploading it. The script works fine for small numbers of files, but I've noticed that the local server can have as many as 3000+ image files to transfer; this seems to cause the script to flop and only transfer 100 or so. How can I modify the script to handle potentially thousands of image files?
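
    One common culprit at that scale is PHP's max_execution_time rather than FTP itself. A hedged sketch of a transfer loop that resets the limit per file ($conn and the file list are assumed to come from the existing class):

        foreach ($files_to_upload as $file) {
            set_time_limit(60);   // reset the script time limit before each file
            if (!ftp_put($conn, $file['remote'], $file['local'], FTP_BINARY)) {
                error_log('upload failed: ' . $file['local']);
            }
        }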

    Read the article

  • Optimize included files and uses in Delphi

    - by Roland Bengtsson
    I am trying to increase the performance of Delphi 2007 and CodeInsight. In the application there are 483 files added to the DPR file. I don't know if it is my imagination, but I feel that I get better performance from CodeInsight simply by re-adding all the files in the DPR. I also think (correct me if I'm wrong) that all files included in a uses section should also be included in the DPR file for best performance. My question is: is there a tool that scans the whole project and gives a list of which files are missing from the DPR file and which files can be removed? It would also be nice to have a list of uses that can be removed in the PAS files. Regards

    Read the article

  • Stored Procedure: Reducing Table Data

    - by SumGuy
    Hi Guys, a simple question about stored procedures. I have one stored procedure collecting a whole bunch of data in a table. I then call this procedure from within another stored procedure. I can copy the data into a new table created in the calling procedure, but as far as I can see the tables have to be identical. Is this right? Or is there a way to insert only the data I want? For example, I have one procedure which returns this:

        SELECT @batch as Batch,
               @Count as Qty,
               pd.Location,
               cast(pd.GL as decimal(10,3)) as [Length],
               cast(pd.GW as decimal(10,3)) as Width,
               cast(pd.GT as decimal(10,3)) as Thickness
        FROM propertydata pd
        GROUP BY pd.Location, pd.GL, pd.GW, pd.GT

    I then call this procedure but only want the following data:

        DECLARE @BatchTable TABLE (
            Batch varchar(50),
            [Length] decimal(10,3),
            Width decimal(10,3),
            Thickness decimal(10,3)
        )

        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        EXEC dbo.batch_drawings_NEW @batch

    So in the second command I don't want the Qty and Location values. However, the code above keeps returning the error: "Insert Error: Column name or number of supplied values does not match table"
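
    INSERT ... EXEC cannot project columns: the target must match the procedure's result shape exactly. The usual workaround is to capture everything first, then select only the columns wanted - a sketch (the Qty and Location types are guesses):

        DECLARE @Full TABLE (
            Batch varchar(50),
            Qty int,                  -- guessed type
            Location varchar(50),     -- guessed type
            [Length] decimal(10,3),
            Width decimal(10,3),
            Thickness decimal(10,3)
        )

        INSERT @Full
        EXEC dbo.batch_drawings_NEW @batch

        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        SELECT Batch, [Length], Width, Thickness
        FROM @Full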

    Read the article

  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment where I have to generate the desired data to a file via script, upload that file to FTP from the host, and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like this:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is the 28.2 at the end of the line. The problem I am currently trying to solve is that when I update the file (upload from the host to FTP, then download from FTP to the Zabbix server), the value does not update. I was trying only log[] at the start, but I suspect that log[] treats the file as a real log file and doesn't re-check the same lines (although, following the documentation, it should with the "all" value), so I added a vfs.file.regexp[] item too. The log[] item has received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines it has already caught for changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent - I haven't found any interesting info about the problem. I have no idea what I might be doing wrong or what I don't know about how Zabbix performs these checks. I see 2 solutions for this: adding more lines to the file instead of making a new one, or making new files and checking them with logrt[], but neither satisfies what I'm after. Any help is greatly appreciated. Of course I will provide additional information if requested - for now I don't know what else might be useful.

    Read the article

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id NUMBER(3),
            name VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id NUMBER(3),
            name VARCHAR2(40),
            contact VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key. My problem is with the select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
        UPDATE temp SET archive_id=something;
        INSERT INTO temp (select * from temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have any solution?
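
    A sketch of a dictionary-driven alternative that avoids DDL entirely: build each table's column list from user_tab_columns and issue one INSERT ... SELECT per table, swapping in the new archive_id. The table names, the archive-id values, and the LISTAGG function (Oracle 11gR2+) are assumptions:

        DECLARE
            v_cols VARCHAR2(4000);
        BEGIN
            FOR t IN (SELECT table_name
                      FROM user_tables
                      WHERE table_name IN ('TB_CUSTOMERS', 'TB_SUPPLIERS')) LOOP
                -- every column except archive_id, in declaration order
                SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
                INTO v_cols
                FROM user_tab_columns
                WHERE table_name = t.table_name
                  AND column_name <> 'ARCHIVE_ID';

                EXECUTE IMMEDIATE
                    'INSERT INTO ' || t.table_name || ' (' || v_cols || ', archive_id) '
                    || 'SELECT ' || v_cols || ', :new_id FROM ' || t.table_name
                    || ' WHERE archive_id = :old_id'
                    USING 2, 1;   -- hypothetical new and old archive ids
            END LOOP;
        END;
        /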

    Read the article

  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support?

    Background: Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, task scanner, etc., the job appears to hang for a long period of time. In monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that files in this path stopped changing. Some key configuration points of the job:

        - It is tied to a specific slave node
        - It builds in a 'custom workspace'
        - It triggers a downstream job that builds in the same custom workspace on the same slave node

    Read the article
