Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.


  • How can I add the version of a file to the file name with Tortoise-SVN?

    - by Eric Belair
    I would like to start giving unique names to "cache-able" files - i.e. *.css and *.js - in order to stop browsers serving stale cached copies, without requiring changes to the web-server settings (as is currently done in IIS). For instance, let's say I have a JavaScript file called global.js. Going forward I would like it to have the name global.123.js when revision 123 is checked in. This would also require the following: the previous version of the file - perhaps it was global.115.js - is removed when the file is deployed, and all references to the file are updated with the new file name. How do I go about doing this? What concerns do I need to consider?
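
    TortoiseSVN has no built-in rename-on-commit feature, so this is usually handled by a deploy-time script that reads the working-copy revision and stamps it into the file names. Below is a minimal Python sketch of that idea; the deployment root, the global.js name, and the *.html rewrite pass are placeholders taken from the question, not a known setup.

        import re
        import subprocess
        from pathlib import Path

        WEB_ROOT = Path("C:/inetpub/wwwroot/site")  # hypothetical deployment root

        def working_copy_revision() -> str:
            # svnversion prints the working-copy revision, e.g. "123" or "123M"
            out = subprocess.run(["svnversion", str(WEB_ROOT)],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip().split(":")[-1].rstrip("MSP")

        def stamp(asset: Path, rev: str) -> Path:
            # Remove previously stamped copies (global.115.js), write global.123.js.
            # Note the glob is loose: it would also catch names like global.min.js.
            for old in asset.parent.glob(f"{asset.stem}.*{asset.suffix}"):
                old.unlink()
            stamped = asset.with_name(f"{asset.stem}.{rev}{asset.suffix}")
            stamped.write_bytes(asset.read_bytes())
            return stamped

        def rewrite_references(rev: str) -> None:
            # Point every page at the newly stamped name.
            pattern = re.compile(r"global(?:\.\d+)?\.js")
            for page in WEB_ROOT.rglob("*.html"):
                text = page.read_text(encoding="utf-8")
                page.write_text(pattern.sub(f"global.{rev}.js", text),
                                encoding="utf-8")

        rev = working_copy_revision()
        stamp(WEB_ROOT / "js" / "global.js", rev)
        rewrite_references(rev)

    The main concerns are the ones the asker anticipates: stale references if the rewrite pass misses a file, and old stamped copies lingering if deployment is interrupted. A query-string version (global.js?v=123) avoids the rename entirely, at the cost of some proxies refusing to cache URLs with query strings.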

  • I'm in a group but can't create or modify files

    - by dac
    I have two user accounts. Let's say one is User1 and the other is User2. Both of these accounts are in the "root" group. I made a folder with some files in it. The owner is User1 and the group is root. The permissions are set so the group "root" can create and delete files. However, when I log in as User2, I can only access files. User2 is in the "root" group for sure, and when I right-click on the folder in Nautilus and open Properties > Permissions, it says there that the "root" group can create and delete files. What's going on? Edit: Logged out and then back in, and, I don't know why, everything works now...
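
    The logout fixing it is the clue: on Linux, a process's supplementary groups are fixed when the session starts, so a group membership added while you are logged in does not take effect until the next login (or a newgrp shell). A quick Python check of the difference between the two views:

        import grp
        import os
        import pwd

        user = pwd.getpwuid(os.getuid()).pw_name
        # Groups this session actually carries (snapshotted at login time).
        active = {grp.getgrgid(g).gr_name for g in os.getgroups()}
        # Groups /etc/group currently assigns the user (applied at next login).
        configured = {g.gr_name for g in grp.getgrall() if user in g.gr_mem}

        print("active in this session:", sorted(active))
        pending = configured - active
        print("pending until re-login:", sorted(pending) if pending else "none")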

  • Why Hekaton In-Memory OLTP Truly is Revolutionary

    - by merrillaldrich
    I just returned from the PASS Summit in Charlotte, NC – which was excellent, among the best I have attended – and I have had Dr. David DeWitt’s talk rolling around in my head since he gave it on Thursday. (Dr. DeWitt starts at 27:00 at that link.) I probably cannot do it justice, but I wanted to recap why Hekaton really is revolutionary, and not just a marketing buzzword. I am normally skeptical of product announcements, and I find too often that real technical innovation can be overwhelmed by the...(read more)

  • How to find corrupted files?

    - by rafalcieslak
    Some files on my hard drive are corrupted (no worries, nothing system-related, just a bunch of data files, MP3s etc.). I found that out when I tried to burn them all to a DVD: the burning application showed a message that it cannot read the files because they are corrupted. (This is probably a drive issue; it has happened to me once or twice already.) I don't care about recovering them, but I have to determine which ones are corrupted. I cannot check by manually opening them all, as there are thousands of them. Is there any tricky way to check all the files and list the ones that may cause problems when opened?
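
    If "corrupted" here means the drive returns read errors (which is what tripped up the burning application), then forcing a full read of every file reproduces the failure. A minimal sketch, assuming read errors surface as OSError; it will not catch files whose bytes read back fine but are silently damaged:

        import os
        import sys

        def find_unreadable(root):
            bad = []
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as f:
                            while f.read(1 << 20):  # read in 1 MiB chunks
                                pass
                    except OSError as err:
                        bad.append((path, err))
            return bad

        if __name__ == "__main__":
            for path, err in find_unreadable(sys.argv[1]):
                print(f"{path}: {err}")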

  • Wiped data, and duplicated folders into files.

    - by Kaustubh P
    Something weird happened today, and I don't know how. Within a folder, all folders have a file by the same name, with a colon appended to it. And all the files from the inner-most directory in my home have been dumped to ~, with a size of 0 bytes. I have not executed any scripts or anything. I was just checking out some Easter eggs, namely "gegls from outer space" and "free the fish", and was away from the computer and got logged out because of the screensaver. I couldn't log back in with my password, so I just reset the PC, and while booting, the PC went into a drive check. BUT, IIRC, I saw the duplicate "folder files" before I had logged out, so that's not the reason! All the files have a timestamp of 14 Jan. Also, the contents of my eclipse folder have been dumped into ~, right down to the jars and ini files. HELP!

  • Learning about BIOS memory, instructions and code origins

    - by m3taspl0it
    I'm learning about the BIOS and have a few questions. What is meant by, "This is the last 16 bytes of memory at the end of the first megabyte of memory"? The first instruction of the BIOS is a jump to the main BIOS program, but where does it jump? Where does the original BIOS code come from? I'm also interested in POST: how are the POST checks executed by the processor?
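
    The "last 16 bytes of the first megabyte" is simple address arithmetic: an 8086-style real-mode address space is 1 MiB, and the reset vector sits 16 bytes below its top, at physical address 0xFFFF0 (segment:offset F000:FFF0). Since only 16 bytes fit between there and the top of memory, the BIOS places a jump there to its real entry point elsewhere in the ROM. A two-line check of the arithmetic:

        MEGABYTE = 1 << 20          # 0x100000, top of the real-mode address space
        reset_vector = MEGABYTE - 16
        print(hex(reset_vector))    # 0xffff0, i.e. segment:offset F000:FFF0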

  • How to test memory in Linux?

    - by sasayins
    Hi, I'm planning to test my Linux box, and I want to start with memory testing. My problem is: what do I need in order to test the memory in my Linux box? Do I need a tool, or are there APIs I can use to build some scripts? Thanks
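
    The usual answers are dedicated tools rather than APIs: memtest86+ (booted from GRUB or a USB stick, so it can test RAM the running OS would otherwise occupy) and memtester (a userspace tester run from a shell). For a script-level feel of what such tools do, here is a deliberately naive Python sketch of the write/verify idea; it only exercises whichever physical pages the kernel happens to hand the process, so it is no substitute for the real tools:

        # Naive pattern test: fill a buffer, read it back, compare.
        # memtest86+/memtester lock pages and use many more patterns.
        def pattern_test(size_mib: int = 64) -> bool:
            size = size_mib << 20
            buf = bytearray(size)
            for pattern in (0x00, 0xFF, 0xAA, 0x55):
                buf[:] = bytes([pattern]) * size
                if any(b != pattern for b in buf):
                    return False
            return True

        print("OK" if pattern_test() else "FAULT")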

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Will the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If that's the case, what happens if I have 5 shaders all using the same texture: does that mean I send 5 instances of that texture data to the GPU, or am I simply telling the GPU which preloaded texture to use? Or does XNA do the heavy lifting in the background?

  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files there, so I cannot remove it. They are:
    Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H
    Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V
    Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll
    Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
    Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll
    Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll
    Whenever I try to delete the files, either through Explorer or a command line, I get a permission-denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have Acrobat installed on my Vista machine, and I uninstalled Adobe Updater. However, I still can't manage to get rid of these files. How do I nuke them for good? Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work, as I was denied permission to do that as well. I'll try booting up in safe mode and getting rid of them there. Edit II: Booting up into safe mode did not allow me to delete the files. Bummer.
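
    Taking ownership alone is not enough; after ownership you still need an explicit full-control grant, and the whole sequence must run from an elevated prompt. A hedged sketch of that sequence using the takeown and icacls tools built into Vista, shown driving them from Python (running the same commands directly in an elevated cmd window is equivalent):

        import subprocess

        files = [
            r"C:\Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll",
            # ...the other five stuck files
        ]

        for f in files:
            # Take ownership, grant Administrators full control, then force-delete.
            subprocess.run(["takeown", "/F", f], check=True)
            subprocess.run(["icacls", f, "/grant", "Administrators:F"], check=True)
            subprocess.run(["cmd", "/c", "del", "/f", "/q", f], check=True)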

  • Finding duplicate files?

    - by ub3rst4r
    I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this. I am most interested in which hash algorithm is best suited to the job. For example, I was thinking of having it get the hash of each file's contents and then group the hashes that are the same. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?
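
    A common pattern: group by file size first (free to compute), then hash only the size collisions. Any streaming hash handles arbitrarily large files, since it reads in chunks; SHA-256 is a safe default, and faster non-cryptographic hashes trade collision guarantees for speed. A sketch:

        import hashlib
        import os
        import sys
        from collections import defaultdict

        def sha256_of(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):   # streaming: no file-size limit
                    h.update(block)
            return h.hexdigest()

        def find_duplicates(root):
            by_size = defaultdict(list)
            for dirpath, _, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    if os.path.isfile(path):
                        by_size[os.path.getsize(path)].append(path)
            by_hash = defaultdict(list)
            for paths in by_size.values():
                if len(paths) > 1:              # only hash size collisions
                    for path in paths:
                        by_hash[sha256_of(path)].append(path)
            return [paths for paths in by_hash.values() if len(paths) > 1]

        for group in find_duplicates(sys.argv[1]):
            print(group)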

  • Hiding recent files in Unity dashboard

    - by Eric
    Ubuntu 13.04 (though I had the same issue in both 12.04 LTS and 12.10), Unity desktop (yes I like it, shush). Anyways, when clicking on the dashboard there is a tab for 'Files and Folders'. I don't have any files on this computer that aren't porn. In other words, it displays the images there (as it's supposed to), but I can't have it displaying them for obvious reasons. I have disabled 'recent activity' and even added the folder it's all in to the 'do not record activity in the following folders' list. I assume that works, but as I don't actually have any other files, it still displays them. I don't want to make it a hidden folder, because it's on an external HDD and that causes issues when moving from computer to computer (I have other movies on it as well). TL;DR: get rid of the 'Files and Folders' tab in the dashboard. Is it possible?

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have 2 approaches to solve this, but I would like to know which approach will put the least stress on my application and server: (1) I send a list of the files I want to download to my server, and the server zips them and simply returns the compressed file to the application; or (2) the updater sends a request for each separate file to the server, which simply returns the file. The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files, and at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
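
    At 10-50 files of at most 1 MB and ~100 updates/day, either approach is light: per-file requests are simpler and static files cost the server almost no CPU, while zipping trades a little CPU for fewer round trips. A third common pattern is a hash manifest, so the client fetches only files that actually changed. A sketch with a hypothetical server layout (manifest.json maps relative path to SHA-256, files are served under /files/):

        import hashlib
        import json
        import urllib.request
        from pathlib import Path

        BASE = "https://updates.example.com/latest"   # hypothetical

        def local_hashes(app_dir: Path) -> dict:
            return {str(p.relative_to(app_dir)).replace("\\", "/"):
                    hashlib.sha256(p.read_bytes()).hexdigest()
                    for p in app_dir.rglob("*") if p.is_file()}

        def update(app_dir: Path) -> None:
            with urllib.request.urlopen(f"{BASE}/manifest.json") as resp:
                remote = json.load(resp)
            local = local_hashes(app_dir)
            for rel, digest in remote.items():
                if local.get(rel) != digest:          # new or changed file
                    target = app_dir / rel
                    target.parent.mkdir(parents=True, exist_ok=True)
                    with urllib.request.urlopen(f"{BASE}/files/{rel}") as resp:
                        target.write_bytes(resp.read())

        update(Path("app"))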

  • Unity Dashboard won't find local files, rearrange icons on two computers

    - by Stanton.Sculpture
    Suddenly I can't move icons around my Unity launcher, and the Dash won't search for my local files and folders. It was working when I first installed 13.10, but now it won't search for local files and it won't let me rearrange the icons in any way. I've tried turning all the scopes (lenses?) on and off in multiple combinations, but it won't find any files unless I use Nautilus to find them; it's mostly unresponsive. I can't see my recently used files or the files-and-folders scope at all. Dragging and dropping the icons on the side dock doesn't work; they only stick to my mouse until I put them back where they were. I cannot unlock any icons from the launcher; clicking just does nothing. I tried rebooting both of my computers and it still won't function normally. I used ubuntu-bug -w to report a bug, and no one has gotten back to me. Is there some option I changed that could cause this? This is a problem on both my laptop and desktop. Please help, Alex

  • Incorrect Dates for Downloadable Files in Google Snippets

    - by alds
    We have a website which creates publications and newsletters. In most (if not all) of the search results for our downloadable files, the Google snippets show dates earlier than when those files were actually published, from one to three months before. This would be impossible, since those files did not even exist before the dates mentioned. The dates themselves do not seem to have any significance on our site. Any suggestions as to where the dates come from?

  • No Sync of Files in Android U1 folder

    - by Oldbwl
    My desktop (Ubuntu 12.10) and laptop (Windows 7) happily sync files; the Ubuntu One folders are perfectly in sync. I also have an Asus Transformer with the Ubuntu One app installed. I can read files from the cloud, and they download to a U1 folder. But if I edit the files on the Asus (say, a spreadsheet in Kingsoft), it appears they never go back to the cloud unless I manually select the file to be uploaded. Is this correct?

  • What do Windows 7 encrypted files look like?

    - by Sean Farrell
    OK, this is kind of an odd question: what do Windows 7 (Home Premium) encrypted files look like "from the outside"? Now here is the story. An acquaintance of a friend of mine got a nasty virus / scareware, so I put on my PC-technician cap and went to work on it. I removed the drive from the laptop, put it into my external drive bay, and scanned it; yes, it was loaded with stuff. That basically cured the infection and I could start the system back up. To check whether it had cured the problem, I wanted to see the system while running. There were two user accounts, one with a password and one without (both admin users!?). So I logged into the unprotected user and cleaned up the residual issues, like the proxy server set to localhost in the browser config. Then I wanted to do the same for the password-protected user. I noticed that, from my system and from the unprotected user account, the files of the protected user looked garbled: the file names were something like 12 random alphanumeric characters, but the folders looked OK. Naive as I was, I thought this might simply be how encrypted files look "from the outside". (I never use Microsoft's own security features, so how would I know? TrueCrypt is one big blob.) Since the second user could not be reached, I thought "sod it" and removed the password from the account. (That might have been a mistake, I know.) I then did the same clean-up tasks, and everything was fine, except for the files, which were still "encrypted". So I read many Windows-encrypted-file recovery posts, and not all hope is lost, since I should be able to extract the certificate and, with the password, regain access to the files. Note also that Windows "only" prompted me that removing the password would be insecure, not that access to encrypted files would be lost, as is claimed in most recovery articles. Resetting the password did not help, and I gave up for the night. The question that nagged me half of last night was: what if the files are not encrypted at all, but the scareware encrypted / destroyed them? I don't want to spend hours of work trying to recover files that are not recoverable. The thing is, the user does not remember turning encryption on, and aren't encrypted files just marked with a colored name, with the file name itself still readable? Many thanks for input from users who have more knowledge about EFS...
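
    There is a quick way to settle the "EFS or malware damage?" question before investing recovery hours: genuinely EFS-encrypted files carry the FILE_ATTRIBUTE_ENCRYPTED attribute, and Explorer shows them with normal, readable file names (in green), whereas scareware-scrambled files typically have that attribute clear. A small check, run on Windows (st_file_attributes is a Windows-only stat field):

        import os
        import stat
        import sys

        def is_efs_encrypted(path: str) -> bool:
            attrs = os.stat(path).st_file_attributes      # Windows-only field
            return bool(attrs & stat.FILE_ATTRIBUTE_ENCRYPTED)

        for path in sys.argv[1:]:
            verdict = ("EFS-encrypted" if is_efs_encrypted(path)
                       else "not EFS (possibly damaged by the malware)")
            print(f"{path}: {verdict}")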

  • How do .so files avoid the problems that MS DLL files have with header-only templates?

    - by Doug T.
    Based on the discussion around this question, I'd like to know how .so files / the ELF format / the GCC toolchain avoid the problems associated with passing classes defined purely in header files (like the std library). According to Jan in that answer, the dynamic linker/loader only picks one version of such a class to load if it's defined in two .so files. So if two .so files have two definitions, perhaps built with different compiler options etc., the dynamic linker can pick one to use. Is this correct? How does this work with inlining? For example, MSVC inlines templates aggressively, which makes the solution described above untenable for DLLs. Does GCC never inline header-only templates like the std library the way MSVC does? If so, wouldn't that make the ELF functionality described above ineffective in these cases?

  • How to process large files in NetLogo? [closed]

    - by user65597
    I am running into problems in NetLogo with large *.csv / *.txt files. The documents can consist of about 1 million records, and I need to read them (to eventually create a diagram based on the data). With the most straightforward source code, my program needs about 2 minutes to process these files. How should I approach reading such large data files faster in NetLogo? Is NetLogo even suitable for such tasks (as it seems to be designed more for teaching and learning)?
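
    Whatever the language, the usual wins for this kind of job are the same: read the file as a stream instead of loading it whole, parse each line exactly once, and keep running aggregates instead of building a million-item list. Sketched here in Python rather than NetLogo (the same structure maps onto a file-open / file-read-line loop there); the column layout is invented for illustration:

        import csv

        def column_sums(path):
            # Stream rows and keep only running totals in memory.
            totals = None
            with open(path, newline="") as f:
                reader = csv.reader(f)
                next(reader)                          # skip the header row
                for row in reader:
                    values = [float(x) for x in row]
                    totals = (values if totals is None
                              else [t + v for t, v in zip(totals, values)])
            return totals

        print(column_sums("data.csv"))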

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operation on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved into a better directory structure. When you use rsync to do the backup, rsync won't notice that a file was renamed or moved and will re-transmit it over the network all over again, despite the same file existing on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then, just prior to a backup, rescan and detect moved or renamed files, so I can take that list and re-apply the move/rename operations on the other side. The "general" features of the files: large, unchanging, and they can be renamed or moved around. [Edit:] These are all good answers; what I ended up doing was looking at all of them, and I will be writing some code to deal with this. Basically what I am thinking/working on now is: use something like AIDE for the "initial" scan, keeping checksums on the files (they are supposed to never change, so checksums help detect corruption); create an inotify daemon that monitors these files/directories and records any renames and moves to a log file; and, since there are edge cases where inotify might fail to record that something happened to the file system, add a final step of using find to search for files with a change time later than the last backup. This has several benefits: checksums from AIDE to make sure some media did not get corrupted; inotify keeps resource usage low, with no need to re-scan the filesystem over and over; and no need to patch rsync (if I have to patch things I can, but I would prefer not to, so I don't need to re-patch every time there is an update). I've used Unison before and it's really nice; however, I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow rather large.
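
    If everything lives on one filesystem, inode numbers make rename detection cheap: a rename or move changes a file's path but not its (device, inode) pair. A minimal sketch that persists an index between runs and emits the mv operations to replay on the remote side before running rsync (the index file name is arbitrary):

        import json
        import os
        import sys

        INDEX = "file-index.json"   # arbitrary name for the persisted index

        def scan(root):
            # Map "device:inode" -> path; the pair survives mv/rename
            # as long as the file stays on the same filesystem.
            index = {}
            for dirpath, _, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    st = os.stat(path)
                    index[f"{st.st_dev}:{st.st_ino}"] = path
            return index

        def detect_moves(root):
            new = scan(root)
            try:
                with open(INDEX) as f:
                    old = json.load(f)
            except FileNotFoundError:
                old = {}
            moves = [(old[k], new[k]) for k in old.keys() & new.keys()
                     if old[k] != new[k]]
            with open(INDEX, "w") as f:
                json.dump(new, f)
            return moves

        for src, dst in detect_moves(sys.argv[1]):
            print(f"mv {src!r} {dst!r}")

    Checksums (as with AIDE) cover the remaining case of a file being copied to a new name rather than moved, where the inode does change.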

  • Uploading Files Using ASP.NET Web Forms, Generic Handler and jQuery

    - by bipinjoshi
    In order to upload files from the client machine to the server, ASP.NET developers use the FileUpload server control, which essentially renders an INPUT element with its type set to file and allows you to select one or more files. The actual upload operation is performed only when the form is posted to the server. Instead of making a full page postback, you can use jQuery to make an Ajax call to the server and POST the selected files to a generic handler (.ashx). The generic handler can then save the files to a specified folder. The remainder of this post shows how this can be accomplished: http://www.bipinjoshi.net/articles/f2a2f1ee-e18a-416b-893e-883c800f83f4.aspx

  • "No inodes left" error, but df -i says otherwise

    - by abhinavkulkarni
    I copied a lot of files to my mounted Windows drive from Ubuntu and subsequently ran into the error Error opening file '/media/windows/<some-file-path>': No space left on device. I checked the output of df -i to see if I had run out of inodes on the mounted Windows drive:
    Filesystem       Inodes   IUsed      IFree IUse% Mounted on
    /dev/sda5       2363904  504119    1859785   22% /
    udev             207621     522     207099    1% /dev
    tmpfs            211487     450     211037    1% /run
    none             211487       3     211484    1% /run/lock
    none             211487       7     211480    1% /run/shm
    none             211487      19     211468    1% /run/user
    /dev/sda2     458686680 2588876  456097804    1% /media/windows
    As the output shows, plenty of inodes are available on /media/windows, and I have plenty of disk space left - around 500 GB. What's the problem then?
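
    For what it's worth, the same check can be done programmatically with os.statvfs, which reports the inode counters that df -i reads (with the caveat that NTFS has no fixed inode table, so the numbers an ntfs-3g mount reports are synthetic):

        import os

        def inode_report(mount: str) -> None:
            st = os.statvfs(mount)
            total, free = st.f_files, st.f_favail
            used = total - free
            pct = 100 * used / total if total else 0
            print(f"{mount}: {used}/{total} inodes used ({pct:.0f}%), {free} free")

        inode_report("/media/windows")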

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here - please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/) and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job - high writes and high reads on about 10,000,000 records in a single collection - on my 4-processor, 4 GB RAM MacBook and an 8-core Ubuntu box with 64 GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. The Linux box's default settings were: vm.swappiness=60, vm.dirty_background_ratio=10, vm.dirty_ratio=20, vm.dirty_expire_centisecs=3000, vm.dirty_writeback_centisecs=500. I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted to: vm.swappiness=10, vm.dirty_background_ratio=5, vm.dirty_ratio=5, vm.dirty_writeback_centisecs=250, vm.dirty_expire_centisecs=500. I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then I REBUILT the collection from an available data source - and suddenly I can read at 1 ms or less per record WHILE doing the write job! So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated... thanks! -j
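
    One practical step when experimenting with these knobs is to confirm what is actually live, since values set with sysctl at runtime and values in /etc/sysctl.conf can drift apart. The kernel exposes them all under /proc/sys/vm, so a few lines suffice:

        from pathlib import Path

        # Print the live values of the vm knobs discussed above.
        KNOBS = [
            "swappiness",
            "dirty_background_ratio",
            "dirty_ratio",
            "dirty_expire_centisecs",
            "dirty_writeback_centisecs",
        ]

        for knob in KNOBS:
            value = Path(f"/proc/sys/vm/{knob}").read_text().strip()
            print(f"vm.{knob} = {value}")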
