Search Results

Search found 51448 results on 2058 pages for 'log files'.


  • Web setup project removes files after upgrade from VS2008 to VS2010

    - by Craig Shearer
    I have a web setup project built using VS2008. I've converted my solution to VS2010, and now when I build my new installer and run the install from the MSI, it installs fine and then, at the last step, removes all the files it has just installed. I have RemovePreviousVersions set to true. If I turn this off, the files remain in place (but I get multiple instances in Programs and Features in the Control Panel). If I run the install again, the files reappear. From then on, the files always remain, even when installing another version. So the problem seems to be with running an installer built using VS2008 and then running the same installer built by VS2010. The upgrade GUIDs on the two installers are the same. What is the cause and how can I fix this?

    Read the article

  • Determining what files are considered open in Mac OSX

    - by Doug
    Hi all; apologies if this has been discussed previously... I did a Stack Overflow and Google search but probably didn't use the right keywords. Anyway, is there an easy way to determine what files are open on Mac OS X? I had an issue in which I could not unmount a FireWire HD until I closed all running apps. It turned out Keychain Access had a reference to a file on the HD, but it raised the question: how do I find out which files are open and which app (or apps) have them open? Thanks in advance, and again apologies if this has been covered previously. Doug.
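
    A hedged illustration of the usual tool for this: lsof ships with Mac OS X and lists open files together with the process holding them. The mount point name below is a placeholder.

        # List every open file on the mounted volume, plus the owning app and PID
        sudo lsof /Volumes/MyDrive

        # Or search the full listing for a path fragment
        sudo lsof | grep "/Volumes/MyDrive"

    When lsof is given a mount point it reports everything open on that filesystem, which answers both halves of the question: which files are open and which app has them.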

    Read the article

  • Converting Outlook Express CSV address book and DBX files into Thunderbird on W7

    - by PiotrK
    Recently I changed my OS from XP to W7. I made a backup of all my Outlook Express messages (the DBX files) and of the address book (as CSV). On W7 I want to import that data into Thunderbird. There is an option for importing from Outlook Express, but it looks for live application data (I can't point it at a directory with the actual files myself), and there is no Outlook Express installed on W7, so I can't just import the data back into it and then into Thunderbird. How can I import that data into Thunderbird?

    Read the article

  • Best way to transfer files across unstable LAN?

    - by JamesTheAwesomeDude
    This is very similar to Question 326211, but in this case the LAN is an unstable Wi-Fi connection. I need to transfer about 11 GiB of files between two computers, both running Linux (although one may be rebooted into Windows). Their connection is both slow and unstable (due to Linux's awful Wi-Fi support), but removable media (such as a flash drive or external hard drive) is not an option at this time. Right now I'm slowly transferring the files, one by one, over SFTP, but I have to reconnect each computer approximately every 90 seconds, and the computers are not very close to each other, so this is not feasible. This is not a duplicate of Question 30186; that one specifically concerns Windows 7, and all the proposed solutions involve closed-source, Windows-only programs (which are all spyware IMHO, and are all off the table even if I trusted them - one of the computers is Linux-only).
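
    Not from the original post, but a common way to survive a flaky link is rsync with --partial inside a retry loop, so an interrupted transfer resumes instead of restarting; a rough sketch, assuming SSH access and rsync installed on both machines (host and paths are placeholders):

        # --partial keeps half-transferred files so each reconnect resumes
        # roughly where the previous attempt stopped.
        until rsync -av --partial --timeout=30 /data/to/send/ user@otherbox:/data/received/; do
            echo "rsync interrupted, retrying in 5 seconds..." >&2
            sleep 5
        done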

    Read the article

  • Advice on creating admin panel where user can upload, remove and order the files

    - by Manoj
    I'm working on a website where a logged-in admin needs the ability to upload and manage multiple PDFs from their computer. They'd need to be able to upload and remove the files. There would also need to be a way for them to sort the list of uploaded files and save that order, so that other visitors to the page see the files in that particular order. I looked into jQuery Uploadify among other things. Would JavaScript be the right way to go? Thanks, Manoj

    Read the article

  • Vantec NexStar NAS Enclosures - Writing large files

    - by peter
    I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS device. When I write a file to the device over eSATA or an SMB share, I cannot write files over 4 GB. I think this is because the drive is formatted with FAT32, which caps a single file at 4 GiB. But when I access the device using FTP it doesn't matter: I can write files of any size, e.g. I wrote one on there last night that was 30 GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.

    Read the article

  • Why is IIS7 not compressing my static files?

    - by Peter Evjan
    I am trying to get IIS to compress jquery.js (and all other static files, but using jquery.js as the example here) on my localhost, but something goes wrong. The funny part is that when I look in my %SystemDrive%\inetpub\temp\IIS Temporary Compressed Files\MySiteName folder, I see the jquery.js file there, and its size is 24 KB. But in the browser, according to the Net tab in Firebug, the size is 69 KB. I've tried the following:
    - Checked that my browser accepts compression: I found "Accept-Encoding: gzip, deflate" in the request headers via Firebug.
    - Enabled Failed Request Tracing: nothing turns up in the %SystemDrive%\inetpub\logs\FailedReqLogFiles folder after I make my request, though.
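
    Not part of the question, but a frequent culprit on IIS7 is the "frequent hit" rule: by default a static file is only compressed once it has been requested a couple of times within a short window, so a single test request comes back uncompressed even though a compressed copy shows up in the temp folder. A hedged web.config sketch (the serverRuntime section is often locked, in which case the same setting has to go into applicationHost.config):

        <configuration>
          <system.webServer>
            <!-- make sure static compression is enabled for this site -->
            <urlCompression doStaticCompression="true" />
            <!-- compress on the first request instead of waiting for repeated hits -->
            <serverRuntime frequentHitThreshold="1" />
          </system.webServer>
        </configuration>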

    Read the article

  • Bash script to keep last x number of files and delete the rest

    - by Brady
    I have this bash script which nicely backs up my database on a cron schedule:

        #!/bin/sh
        PT_MYSQLDUMPPATH=/usr/bin
        PT_HOMEPATH=/home/philosop
        PT_TOOLPATH=$PT_HOMEPATH/philosophy-tools
        PT_MYSQLBACKUPPATH=$PT_TOOLPATH/mysql-backups
        PT_MYSQLUSER=*********
        PT_MYSQLPASSWORD="********"
        PT_MYSQLDATABASE=*********
        PT_BACKUPDATETIME=`date +%s`
        PT_BACKUPFILENAME=mysqlbackup_$PT_BACKUPDATETIME.sql.gz
        PT_FILESTOKEEP=14
        $PT_MYSQLDUMPPATH/mysqldump -u$PT_MYSQLUSER -p$PT_MYSQLPASSWORD --opt $PT_MYSQLDATABASE | gzip -c > $PT_MYSQLBACKUPPATH/$PT_BACKUPFILENAME

    The problem with this is that it keeps dumping backups into the folder and never cleans up old files. This is where the variable PT_FILESTOKEEP comes in: whatever number it is set to is the number of backups I want to keep. All backups are timestamped, so ordering them by name in descending order gives the latest first. Can anyone please help me with the rest of the bash script to add the cleanup of old files? My knowledge of bash is lacking and I'm unable to piece together the code to do the rest.
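
    A possible cleanup step to append to the script above; it relies on the timestamped names shown (epoch seconds sort correctly as plain text) and on the GNU version of xargs, so treat it as a sketch:

        # Delete everything except the $PT_FILESTOKEEP newest backups.
        cd "$PT_MYSQLBACKUPPATH" || exit 1
        ls -1 mysqlbackup_*.sql.gz | sort -r | tail -n +$((PT_FILESTOKEEP + 1)) | xargs -r rm -f

    sort -r puts the newest dump first, tail -n +15 (with 14 files to keep) passes only the older ones through, and xargs -r skips the rm entirely when there is nothing to delete.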

    Read the article

  • Securing Files over the Web: Fine-Grained, Authorization-Based File Access

    - by Nishant
    I have a system where employees can upload files. There are three ways:
    - Upload to my account in public, private or protected mode
    - Upload to a department account in public, private or protected mode
    - Upload to an organization account in public, private or protected mode
    Here "public" is visible to anyone, "private" only to the group or person, and "protected" to anyone in the organization. All the files for an organization are stored in a directory, say /files/<organizationId>/, on the file server, like:

        files
        +-- 234809
        |   +img1.jpg
        |   +doc1.pdf
        +-- 808234
        |   +doc2.pdf

    I am storing the file path and privacy level in the DB, so I can control whether to show a link to a file URL to a user on a given page. The problem is that I do not have any control over the file's URL... so, if someone types the URL to img1.jpg in his browser's address bar, there is no way to know whether a logged-in user is eligible to see img1.jpg. Any suggestions? Thanks Nishant

    Read the article

  • PHP - Opening uploaded DOCX files with the correct MIME TYPE

    - by user270797
    I have users uploading DOCX files which I make available for download. The issue we have been experiencing is the unknown MIME type of DOCX files, which causes IE to open these docs as Zip files. It is running on a Windows/IIS server. Because this is a shared host, I cannot change any server settings. I was thinking that I could just write some code to handle DOCX files, perhaps custom output along these lines:

        if (pathinfo($filename, PATHINFO_EXTENSION) == 'docx') {
            header('Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document');
            header('Content-Disposition: attachment; filename="' . basename($filename) . '"');
            readfile($filename);
            exit;
        }

    Would this be a viable solution? If so, can someone help fill in the gaps? (PS: this is just a quick example, not the exact code.)

    Read the article

  • Using .inc files when theming.

    - by Nick Lowman
    Hi there, I noticed that in the Zen theme there are various PHP files with the .inc file extension, e.g. template.conditional-styles.inc. I've read/watched quite a few theming tutorials, but none of them mentioned these files for theming, only template.php. Can anyone tell me when, if and how I should be using these files for theming? Many thanks

    Read the article

  • SVN Mac - Stripping files of SVN metadata?

    - by Jasconius
    I downloaded some source files on a Mac that were previously part of a working copy on the author's computer. I need to use these files in another repository, but the SVN client "Versions" for Mac is picking up the data from this old repository. I can't find the ".svn" folder anywhere... any idea how to "cleanse" these files so I can commit them to my repository?
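
    For reference, a sketch of the two usual command-line options (the metadata lives in hidden .svn directories, which the Finder does not show by default); paths are placeholders:

        # Option 1: let Subversion produce a clean, metadata-free copy
        svn export /path/to/downloaded/source /path/to/clean/copy

        # Option 2: strip the hidden .svn directories in place
        find /path/to/downloaded/source -type d -name .svn -prune -exec rm -rf {} +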

    Read the article

  • Word 2007 cannot open old doc files anymore

    - by nilsi
    Since last week I have been unable to open old .doc files with Microsoft Word 2007. Whenever I try, I first get a warning about converters being a security issue (I can disable that in the registry). After accepting (or disabling) this warning, however, I just get the following error message: Ungültiger Datentyp (Word 6.0/95 für Windows & Macintosh), which means in English: Invalid data type (Word 6.0/95 for Windows & Macintosh). I tried to google both the German and the translated error message but did not find anything related. The files in question can be opened by other users of the Windows terminal server without problems.

    Read the article

  • Hardlink files not the same

    - by SabreWolfy
    I created a hardlink of a file as follows: ln /path/to/source/file1 /path/to/target/file2 Using md5sum, the two files are identical. After a while, the source file has been modified by another program. The target file does not get "updated". The md5sums are now different. The files are on the same partition of course, otherwise I could not create a link. What I'm trying to do is get a copy of the source file into the target folder (which is versioned), so that I have access to the source file elsewhere. I tried moving the source file to the target folder with a different name and then creating a symlink to it at the source, but the program expecting the file then (somehow) created a file of the name it wanted in the target folder. Ideas?
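
    One thing worth checking, not stated in the post: many programs save by writing a new file and renaming it over the old name, which gives the source path a new inode and silently detaches any hard link to it. Comparing inode numbers shows whether that happened:

        # The first column is the inode; hard-linked names must share it,
        # and the link count (third column) should be 2 for both.
        ls -li /path/to/source/file1 /path/to/target/file2

    If the inodes differ, the link was broken by the other program, and a periodic copy (cp) into the versioned folder may be more robust than a hard link.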

    Read the article

  • Java File and ByteArray or InputStream - please quick help

    - by Peter Perhác
    I want to use jFuge to play some MIDI music in an applet. There's a class for the MIDI pattern - Pattern - and the only method to load the pattern is from a File. Now, I don't know how applets load files and what not, but I am using a framework (PulpCore) that makes loading assets a simple task. If I need to grab an asset from a ZIP catalogue, I can use the Assets class which provides get() and getAsStream() methods. get() returns the given asset as a ByteArray, the other as an InputStream. I need jFuge to load the pattern from either ByteArray or InputStream. In pseudo-code, I would like to do this: Pattern.load(new File(Assets.get("mymidifile.midi"))); however there is no File constructor that would take a ByteArray. Suggestions, please?
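
    A rough workaround sketch, not from the post: copy the asset's InputStream into a temporary file and hand that File to the library. Assets.getAsStream is the PulpCore call mentioned above; everything else is plain java.io. Note that an unsigned applet is normally not permitted to write temp files, so this only helps if the applet runs with sufficient permissions.

        import java.io.*;

        public final class AssetToFile {
            /** Copies an InputStream to a temp file so APIs that only accept java.io.File can use it. */
            public static File toTempFile(InputStream in, String suffix) throws IOException {
                File tmp = File.createTempFile("asset", suffix);
                tmp.deleteOnExit();
                OutputStream out = new FileOutputStream(tmp);
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    out.close();
                    in.close();
                }
                return tmp;
            }
        }

        // Usage inside the applet (the exact load method is whatever jFuge exposes):
        // File midi = AssetToFile.toTempFile(Assets.getAsStream("mymidifile.midi"), ".midi");
        // Pattern pattern = Pattern.load(midi);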

    Read the article

  • Cleaning up temp folder after long-running subprocess exits

    - by dbr
    I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these. When the image-viewing process exits, I want to remove the temporary images. I can't do this from Python, as the Python process may have exited before the subprocess completes, i.e. I cannot do the following:

        p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
        p.communicate()
        os.unlink("/example/image1.jpg")
        os.unlink("/example/image2.jpg")

    ...as this blocks the main thread; nor could I check for the PID exiting in a thread, etc. The only solution I can think of means I have to use shell=True, which I would rather avoid:

        cmd = ['imgviewer']
        cmd.append("/example/image2.jpg")
        for x in cleanup:
            cmd.extend(["&&", "rm", x])
        cmdstr = " ".join(cmd)
        subprocess.Popen(cmdstr, shell = True)

    This works, but is hardly elegant, and will fail with filenames containing spaces etc. Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
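
    One sketch of that idea which avoids shell=True: spawn a small detached Python helper that launches the viewer, waits for it, and then removes the temp files, so the parent can exit immediately. "imgviewer" and the paths are the placeholders from the question.

        import subprocess, sys

        images = ["/example/image1.jpg", "/example/image2.jpg"]

        # Source of the helper process: launch the viewer, wait for it to exit,
        # then delete the files it was shown. It runs in its own interpreter,
        # so it keeps going even after the parent process has quit.
        helper = (
            "import os, subprocess, sys\n"
            "files = sys.argv[2:]\n"
            "subprocess.call([sys.argv[1]] + files)\n"
            "for f in files:\n"
            "    try:\n"
            "        os.unlink(f)\n"
            "    except OSError:\n"
            "        pass\n"
        )

        subprocess.Popen([sys.executable, "-c", helper, "imgviewer"] + images)
        # The parent may exit now; the orphaned helper is re-parented and
        # still performs the cleanup once the viewer closes.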

    Read the article

  • Bacula not backing up all the files it should be doing

    - by Nigel Ellis
    I have Bacula (5.2) running on a Fedora 14 system, backing up several different computers including Windows 7, Windows 2003 and Windows 2008. When backing up the Windows 2008 server, the backup stops after a relatively small amount has been backed up and reports that the backup was okay. The fileset I am trying to back up should be around 323 GB, but it manages a mere 27 GB before stopping - without erroring. I did try creating a mount on the Fedora computer to the server I am trying to back up, and Bacula managed to copy 58 GB. When I tried to use the mount to copy the files manually, I was able to copy them all - there are no problems with permissions etc. on the mount. Please can anyone give a reason why Bacula would just stop? I have heard there is a 260-character limit, but some of the files that should have been copied resolve to shorter filenames than some that have been backed up.

    Read the article

  • IIS website is sending multiple content-type headers for zip files

    - by frankadelic
    We have a problem with an IIS5 server. When certain users/browsers click to download .zip files, binary gibberish sometimes renders in the browser window instead. The desired behavior is for the file either to download or to open with the associated zip application. Initially, we suspected that the wrong Content-Type header was set on the file. The IIS tech confirmed that .zip files were being served by IIS with the MIME type "application/x-zip-compressed". However, an inspection of the HTTP packets using Wireshark reveals that requests for zip files return two Content-Type headers:

        Content-Type: text/html; charset=UTF-8
        Content-Type: application/x-zip-compressed

    Any idea why IIS is sending two Content-Type headers? This doesn't happen for regular HTML or image files, but it does happen with ZIP and PDF. Is there a particular place we can ask the IIS tech to look? Or is there a configuration file we can examine?

    Read the article

  • Too many open files error in Glassfish3 while using https listener

    - by a1ex07
    I have a problem running a webservice that requires an HTTPS connection (Glassfish 3). After running for a while, it eventually crashes. The log file shows "Failed to load keystore type JKS with path ....config/keystore.jks due to ...config/keystore.jks (Too many open files)". lsof shows that the number of open files is constantly increasing (among others, I noticed many files of type 'sock' with undefined protocol that never get closed). I tried raising the open-files limit, but that only resulted in a longer time before crashing... I blamed the webservice, but everything works fine if the application doesn't require a confidential protocol. Did I miss anything in the HTTP listener configuration? Or is it rather an application error? Thanks in advance
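
    Not from the original post, but a quick way to confirm a descriptor leak while the service is running is to watch the JVM's file-descriptor count; this assumes a Linux host and a single Glassfish process, so adjust as needed:

        # Count open descriptors for the Glassfish JVM every 10 seconds
        PID=$(pgrep -f glassfish | head -n 1)
        watch -n 10 "ls /proc/$PID/fd | wc -l"

        # Compare against the per-process limit
        grep 'open files' /proc/$PID/limits

    A count that climbs steadily toward the limit while traffic stays constant points at a leak in the listener or application rather than at a limit that is simply too low.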

    Read the article

  • Netbeans 6.8 groovy files in src/main/java

    - by Jeff Storey
    I have a new NetBeans Maven/Groovy project, and I actually prefer to mix my Java and Groovy files in src/main/java and src/test/java (I find it easier to navigate this way, and my pom reflects this configuration). However, when I have my project set up this way in NetBeans 6.8, it always shows the generated-sources folder in error. The stubs generated from Groovy files in src/test/java can't be opened by NetBeans, and it gives an error that they can't be parsed. However, in Windows Explorer the files are intact. NetBeans can run the project, but it continues to tell me that some files are in error (even though I know they're not). It's like NetBeans isn't refreshing itself. Any thoughts on how to fix this? thanks, Jeff

    Read the article

  • What is the standard place for static library files on Unix/Ubuntu

    - by Max
    Hi, I am trying to install a library manually - well, actually just put it in a sensible location, preferably in my LIB path. I have a lib[...].a file and a bunch of headers pertaining to that static library. If I look under /usr/lib/ I see only .so files, and likewise for /lib/, /lib32/ etc. I figure I could chuck it in there, but is there any place where it can get cozy with other .a files, or is that as good a place as any? I'm not a library expert, and I'm pretty sure it won't matter functionally, but I'd like to learn the conventional best practice. Also, where is the standard place to put the headers? Thanks!
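
    A sketch of the usual convention: libraries installed by hand go under /usr/local (the package manager owns /usr), with the archive in /usr/local/lib and its headers in /usr/local/include; the library name below is a placeholder.

        sudo cp libfoo.a /usr/local/lib/
        sudo cp foo.h /usr/local/include/

        # gcc searches /usr/local on most distributions, but being explicit never hurts
        gcc main.c -I/usr/local/include -L/usr/local/lib -lfoo -o main

    No ldconfig step is needed for a static archive, since the code is linked in at build time rather than loaded at run time.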

    Read the article

  • Missing .cs files in precompiled website with C# in ASP.NET

    - by Greg
    Hi, I need to change the code of an ASP.NET application, but the application is missing its .cs files; there are only .aspx files. From what I read on Google, I understand that the application is a precompiled website. I am not too familiar with this, so the question is: can I somehow retrieve the code-behind .cs files of this application? I need to change some functions there. Surely there is a way I can access or retrieve them somehow? Thanks in advance, Greg

    Read the article

  • Randomly selecting lines from files

    - by AlgoMan
    I have a bunch of files, and every file has a header of 5 lines. In the rest of the file, each pair of lines forms an entry. I need to randomly select entries from these files. How can I select a random file and a random entry (a pair of lines, excluding the header)?
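
    A possible shell sketch, assuming the layout described (a 5-line header followed by 2-line entries), GNU shuf, and a placeholder directory name:

        # Pick a random file, then a random 2-line entry after the 5-line header
        file=$(shuf -n 1 -e /path/to/files/*)
        entries=$(( ($(wc -l < "$file") - 5) / 2 ))
        i=$(shuf -i 0-$((entries - 1)) -n 1)
        sed -n "$((6 + 2*i)),$((7 + 2*i))p" "$file"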

    Read the article
