Search Results

Search found 6101 results on 245 pages for 'incremental backup'.


  • Virtual Server 2005 R2 kungfu

    - by AngryHacker
    Does Virtual Server 2005 R2 have a command-line interface that's versatile enough? Here is the situation. I run a Win2k VM on an old, memory-constrained machine. I allocate it 378MB of RAM and the VM runs just fine. Once a month, inside the VM, I back up a very large database, compress it using 7-Zip, and FTP it to the backup site (all in a script). Unfortunately, the compression part takes a massive amount of RAM (far exceeding the 378MB); it goes for the paging file, brings absolutely everything to a crawl, and literally takes 2-3 days if left unattended. To fix this, I have to shut down the VM, temporarily give it 768MB of RAM, and then the whole thing finishes in 20 minutes. So, is there a way to do the following automatically from the host machine in a script?

    1. Shut down the guest OS (I think I've got this part)
    2. Change the RAM allocation from 378 to 768
    3. Start the guest OS again

    Then, 1 hour later, do everything in reverse.
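    For what it's worth, Virtual Server 2005 R2 exposes a COM API rather than a classic command-line tool, and the steps above can be scripted against it from the host. A minimal PowerShell sketch, assuming the COM members documented in the Virtual Server SDK (FindVirtualMachine, GuestOS.Shutdown, Memory, Startup) - the VM name and timings are placeholders to verify against your setup:

        # Sketch only: shut down the guest, bump its RAM, and boot it again.
        # Requires VM Additions in the guest for a clean GuestOS.Shutdown().
        $vs = New-Object -ComObject "VirtualServer.Application"
        $vm = $vs.FindVirtualMachine("Win2k")      # VM name is a placeholder

        $task = $vm.GuestOS.Shutdown()             # clean shutdown inside the guest
        $task.WaitForCompletion(-1)                # -1 = wait indefinitely

        $vm.Memory = 768                           # RAM in MB; VM must be stopped
        $vm.Startup() | Out-Null

        Start-Sleep -Seconds 3600                  # an hour later, reverse it
        $task = $vm.GuestOS.Shutdown()
        $task.WaitForCompletion(-1)
        $vm.Memory = 378
        $vm.Startup() | Out-Null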

    Read the article

  • Environment variable for volume names in Windows?

    - by Shinrai
    I'm trying to write a batch file that will look at the volume label of the current drive and report if it's not equal to a certain string. Is there a default variable in the shell for this? Can I define one? Am I SOL, and will I actually have to do some (shudder) programming? EDIT: If this is possible in PowerShell, that would work fine. (For the curious, we ship our machines with software cloning as a rapid bootable backup solution, since most of our customers are day traders and aren't interested in RAID, due to the urgency of getting-the-hell-back-to-work-right-away if there's a software corruption problem. We want to make it immediately obvious if they're booting to the backup drive unintentionally - like, say, if the primary failed entirely. The hope was just to write a simple batch file that would autostart on boot and throw a warning in the event of a problem.)
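    Since PowerShell is an option, here is a minimal sketch using WMI; the expected label "PRIMARY" is a placeholder assumption to replace with your primary drive's label:

        # Sketch: warn at boot if the current system drive's label is unexpected.
        $expected = "PRIMARY"   # placeholder: the label of your primary drive
        $disk = Get-WmiObject -Class Win32_LogicalDisk -Filter "DeviceID='$env:SystemDrive'"
        if ($disk.VolumeName -ne $expected) {
            Write-Warning "Volume label is '$($disk.VolumeName)', not '$expected' - you may have booted from the backup drive!"
        }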

    Read the article

  • rsync OS X to Linux

    - by Nick
    I did a backup to a remote NFS folder with rsync, from a Mac to a remote Debian box. The final backup is 58GB smaller than the original, yet rsync says that everything was OK and there is nothing left to update.

        Macintosh:/Volumes/Data1 root# du -sh Produccion/
        319G    Produccion/

        root@Disketera:/mnt/soho_storage/samba/shares# du -sh Produccion/
        260G    Produccion/

    Can I trust rsync? I'm using rsync -av --stats /Volumes/Data1/Produccion/ /mnt/red/ (/mnt/red is my Samba mountpoint). Some folders differ:

        root@Disketera:/mnt/soho_storage/samba/shares/Produccion/tiposok# du -sh *
        0       IndoSanBol
        0       IndoSans-Bold
        0       IndoSans-Italic
        0       IndoSans-Light
        0       IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        12K     TCL IndoSans_mac

        Macintosh:/Volumes/Data1/Produccion/tiposok root# du -sh *
        36K     IndoSanBol
        40K     IndoSans-Bold
        36K     IndoSans-Italic
        36K     IndoSans-Light
        36K     IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        160K    TCL IndoSans_mac

    Read the article

  • How can I switch Linux running OS from disk to running from RAM without restarting?

    - by vfclists
    Is it possible to switch to running Linux from RAM or a RAM disk after initially booting from disk? E.g. you need to make an image of your hard disk and FTP it to a remote location; some time later you want the image back, so you boot the system from disk as usual and restore the image you FTP'd back into place. Much like a CloneZilla backup and restore, but without booting the server from CD or USB disk - starting from the normal hard disk instead. Notes on environment: I should have mentioned this earlier. It is a remotely hosted VM where I cannot boot into a recovery console mode or do a netinstall, and it will always boot from the same disk. This means that if there is some serious corruption I can't repair it offline, which is why being able to FTP a previously saved backup into place is so important.

    Read the article

  • Too Many Files In Debian Linux Folder?

    - by Dave Potts
    I've been using an external USB drive on a Debian server for backup. The drive is formatted as NTFS and mounted with ntfsmount. This was working fine, but I was filling up a directory with lots of files, and eventually the backup failed. When I then tried to look at the directory using ls, it reported:

        ls: reading directory .: Numerical result out of range

    Looking in syslog, I also saw this:

        Sep 23 07:35:31 tosh ntfsmount[28040]: Failed to read index block: Numerical result out of range.

    Is this simply that I've reached the upper limit on the number of files in a directory? If so, is there any way to extend the number of allowed files?

    Read the article

  • MS Windows issue - "Filename or extension is too long"

    - by Daniel
    I run Microsoft Windows on a few of my machines. I don't know if many people are aware of this issue in the OS, but you can't have very long filenames; from what I know, Linux allows longer names, and I have never run into this issue on my Linux machines. Anyway, I run into trouble whenever I copy folders and files to backup drives. I back up my data manually, finding and changing the names of files that are too long, which is very, very tedious. Is there a software tool that shortens folder or file names found to be too long on Windows? I have drive-image duplication software which does the job, but in a way that I don't like; plus, moving files can become a hassle at times if the names are too long to copy.
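    There may well be a dedicated tool, but even just locating the offenders first helps. A minimal PowerShell sketch, assuming the classic 260-character MAX_PATH limit (roughly 248 for directories); D:\Data is a placeholder for the tree you back up:

        # Sketch: list the longest paths so they can be shortened before copying.
        Get-ChildItem -Path "D:\Data" -Recurse -ErrorAction SilentlyContinue |
            Where-Object { $_.FullName.Length -gt 248 } |
            Sort-Object { $_.FullName.Length } -Descending |
            ForEach-Object { "{0,4}  {1}" -f $_.FullName.Length, $_.FullName }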

    Read the article

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue with a duplicated incremental field in a concurrency scenario. I'm using EF as the ORM tool, attempting to insert an entity with a field that acts as an incremental INT field. Basically this field is called "SequenceNumber": before each insert, the code reads the database using MAX to get the last SequenceNumber, adds 1 to it, and saves the changes. Between getting the last SequenceNumber and saving, that's where the concurrency problem happens. I'm not using the ID for SequenceNumber, as it is not a unique constraint and may reset under certain conditions, such as monthly, yearly, etc.

        InvoiceNumber  | SequenceNumber | DateCreated
        INV00001_08_14 | 1              | 25/08/2014
        INV00001_08_14 | 1              | 25/08/2014  <= (concurrency is creating two SeqNo 1)
        INV00002_08_14 | 2              | 25/08/2014
        INV00003_08_14 | 3              | 26/08/2014
        INV00004_08_14 | 4              | 27/08/2014
        INV00005_08_14 | 5              | 29/08/2014
        INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but I want to know the best practice:

    1) Locking - block any reads on the table until the current transaction is completed (I'm not fond of this idea, as I guess the performance impact would be great).
    2) Create a stored procedure solely for this purpose that does the select and insert in a single statement, so the concurrency window is minimal (I would prefer an EF-based approach if possible).

    Read the article

  • Windows Server 2008 R2 bare metal restore to different hardware

    - by S Falken
    Scenario: I have a Windows Server 2008 R2 x64 installation whose main disk drive is now 7 years old and showing signs of age. For the last couple of months it has been displaying increasing errors and demands to run chkdsk. I have successfully created a bare metal restore (BMR) image on a separate data drive on the server, which can be seen from the Windows Recovery console; I tested it by booting to the Windows Server installation DVD and using its recovery utilities. The BMR image includes the system drive with boot partition, the system state, and the D:\ drive of the server, which is where I have followed the practice of installing any program that does not require a C:\ installation path. Therefore, the BMR includes both the C:\ and D:\ drives, system state and boot partition. The C:\ drive is a 7-year-old Seagate 160GB. The D:\ drive is a rather newer 120GB Western Digital. I have purchased a 128GB Samsung 830 SSD that I want to restore these partitions to, using the BMR.

    Questions:

    1. In the above-referenced article, Microsoft seems to be indicating that I am only able to restore to like-kind hardware, which doesn't help at all and is difficult to believe. Is this really true?
    2. I've cleaned these drives up and minimized the size of partition they require. C:\ will need about a 70GB partition, and the data on D:\ will need about 50GB. Will Windows Server Backup allow me to restore the BMR to newly-created partitions on the SSD, discarding the extra space? I don't need a "how-to"; I just need an "is it possible".

    Justification: Before posting this question, I checked ServerFault articles with the following titles, but none of them were about this exact scenario:

    - Restore SBS 2008 Backup to Same Hardware but Different Disk Configuration
    - Restoring Windows Server 2008 to different hardware - OEM License
    - Restoring II6 server after a hardware failure
    - windows 2008 r2 fail to restore
    - Domain controller failed to restore using windows backup tools
    - How does restore to dissimilar hardware work?
    - Migrating Windows 2008 R2 from a PC to a different PC
    - TFS 2005 Server restore from one hardware to another

    I also researched Microsoft but only received an oblique answer which was not precisely aimed at my question, at the following URL: http://support.microsoft.com/kb/249694#method3

    Read the article

  • Is it safe to use consumer MLC SSDs in a server?

    - by Zypher
    We (and by we I mean Jeff) are looking into the possibility of using consumer MLC SSDs in our backup data center. We want to try to keep costs down and usable space up, so the Intel X25-Es are pretty much out at about $700 each for 64GB of capacity. What we are thinking of doing is buying some of the lower-end SSDs that offer more capacity at a lower price point. My boss doesn't think spending about $5k for disks in servers running out of the backup data center is worth the investment. Just how dangerous an approach is this, and what can be done to mitigate the dangers?

    Read the article

  • How to collect Security Event Logs for a single category via Powershell

    - by Darktux
    I am trying to write a script which collects the Security log from all of our domain controllers hourly and stores the output remotely. I can collect the security logs, but is there a way to collect them by category or event number from the DC? Please let me know if you have any additional questions. My code:

        $Eventlogs = Get-WmiObject -Class Win32_NTEventLogFile -ComputerName $computer
        Foreach ($log in $EventLogs) {
            if ($Log.LogFileName -eq "Security") {
                $Now = [DateTime]::Now
                $FileName = "Security" + "_" + $Now.Month + $Now.Day + $Now.Year + "_" + $Now.Hour + $Now.Minute + $Now.Second
                $path = "\\{0}\c$\LogFolder\$folder\$FileName.evt" -f $Computer
                # BackupEventLog returns 0 on success
                $ErrBackup = ($log.BackupEventLog($path)).ReturnValue
                if ($clear) {
                    if ($ErrBackup -ne 0) {
                        "Backup failed"
                        "Backup Error was " + $ErrBackup
                    }
                }
            }
        }
        Copy-EventLogsToArchive -path $path -Folder $Folder
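    Not part of the asker's script, but one way to pull only selected Security events is Get-WinEvent with a filter hashtable; a minimal sketch, where DC01, the event IDs and the one-hour window are placeholder assumptions:

        # Sketch: fetch only specific Security event IDs from a DC, then save them.
        $events = Get-WinEvent -ComputerName "DC01" -FilterHashtable @{
            LogName   = 'Security'
            Id        = 4624, 4625               # e.g. logon success/failure
            StartTime = (Get-Date).AddHours(-1)  # last hour only
        }
        $events | Export-Clixml "C:\LogFolder\DC01_Security_LastHour.xml"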

    Read the article

  • Best way to troubleshoot apache not starting?

    - by lowgain
    We have recently gotten a backup server to mirror all our data onto in case the primary server goes down. I've gotten all the sites' data updated through rsync, and all the Apache config and databases updated. Both machines are on Ubuntu (9.04 on the primary, 9.10 on the backup). So everything seems synced up for the most part at this point (I still need to figure out user syncing), and I try to start Apache. I get:

        * Starting web server apache2        [fail]

    Nothing else indicates what the problem could be. I know I don't have enough info to expect a solution from you guys, so I'd just like to know where I can go from here to further investigate this issue. Would there be any error logs for this? Thanks!

    Read the article

  • Toshiba Equium A110-252 laptop won't boot Windows.

    - by Drew Gibson
    I have a Toshiba Equium A110-252 laptop (XP Pro) which stopped booting a couple of weeks ago. The symptoms are that the laptop displays its DOS Toshiba splash screen, then the Windows (XP) boot screen, then a very brief blue-screen flash, then it's back at the boot options screen (offering safe mode, safe mode with networking, last known good config, etc. - none of these work). I assumed a duff HD and swapped it for a new one, then reinstalled from a TrueImage backup. This worked until, I think, I installed Windows updates (the backup was a few months old), and now the PC is doing the same thing. I have installed a third, old HD (60GB) with Ubuntu 11.04 on it, and everything is fine: the PC boots and runs Ubuntu beautifully. I need Windows running on it, though! Given these symptoms, could it be a Windows update issue? Or might I still have a hardware fault?

    Read the article

  • How can I restrict the backuppc client user as much as possible? (rsync)

    - by jxn
    I have BackupPC making full backups of servers, but I'd like to be sure that my setup is as paranoid as possible. BackupPC is set up to back up via rsync, using a specific user on each client to be backed up. Because the BackupPC client user has to have access to every file on the client machine and the ability to ssh into the machine without an interactive password, I'm a little nervous about securing the clients, and I'd like to know I haven't overlooked any options. Here's what I have in place. In the client user's authorized_keys file, I've included the following before the user's public key, so that the user can only log in from the BackupPC server:

        from="IPTOSERVER",command="/usr/bin/rsync"

    Next, in the sudoers file, I've added this line to allow root-level permissions only for the rsync command for that user:

        backuppc ALL=NOPASSWD: /usr/bin/rsync

    Are there other user, policy, or ssh restrictions that I can add while still allowing the BackupPC client user to rsync all files?

    Read the article

  • About Hard Disk Drive Docks

    - by Crossbrowser
    I'm thinking of buying a drive dock to put my unused large HDD to use. I will also probably use the dock to back up files and swap the drives regularly. I have a few questions, though:

    1. Are they noisy?
    2. I plan to use them via USB (because I don't think I have eSATA connectors) - am I gonna want to kill myself every time I back up? (I know it's supposed to be 480 Mbps, but how realistic is this?)
    3. Do you recommend a particular model? (I was thinking about this StarTech HDD dock.)

    Thank you

    Read the article

  • Steps to install solely ubuntu 13.04 on Dell inspiron 14z ultrabook with SSD+HDD

    - by rishy
    I have tried a few things, like disabling Intel Smart Response and choosing AHCI in the BIOS, but there are certain problems I am still facing:

    1. I can't see my SSD during the installation of Ubuntu (I am planning to install Ubuntu on my SSD and keep other files on the HDD).
    2. When I run Ubuntu, my laptop overheats and battery life drops to 90 minutes (I guess it's related to my graphics driver, an ATI Radeon HD 7570). The cooling fan seems to run at its fullest; it worked much better in Windows.

    So, overall, I wanted to know the exact steps I need to follow to install Ubuntu on my SSD and then use my HDD to keep other files, and how I can get rid of the overheating and battery life problems.

    Read the article

  • rsync to ONLY keep files in destination that have been removed from source

    - by David Corley
    We use rsync to copy filesystem contents from one machine to another as a backup. We first run a MACHINE-X to MACHINE-Y rsync for a straight backup, with the --delete and --delete-excluded switches. We also run an internal rsync between the MACHINE-Y destination and another folder on MACHINE-Y, without either of the delete flags. This maintains a non-destructive copy in case someone inadvertently deletes a file on MACHINE-X. However, it also has the overhead of being a complete copy of what has already been synchronized. Ideally, I want to be able to run the non-destructive rsync in such a way that the destination ONLY receives the deleted files, and so avoid unnecessary duplication. Is there any way to do this?

    Read the article

  • Repair BAD Sectors or Buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive. I recently opened up my laptop [Dell Inspiron N5010, warranty expired], cleaned it, and it worked normally after waking up from hibernation. However, when I restarted it, it got stuck on the Windows loading screen, then tried to boot from the Dell recovery partition but failed, giving this error:

        Windows has encountered a problem communicating with a device connected to your
        computer. This error can be caused by unplugging a removable storage device such as
        an external USB drive while the device is in use, or by faulty hardware such as a hard
        drive or CD-ROM drive that is failing. Make sure any removable storage is properly
        connected and then restart your computer.
        If you continue to receive this error message, contact the hardware manufacturer.
        Status: 0xc00000e9
        Info: An unexpected I/O error has occurred.

    While cleaning, I had mistakenly touched the round silvery thing at the bottom of the HDD; I don't know whether this caused the problem or not. Since I also have Fedora installed on the same HDD, I can boot from it, but it shows weird read errors when I ask it to mount the Windows partitions. The disk utility also says that the hard disk has many bad sectors and needs to be replaced. I downloaded SeaTools from the Seagate website and ran it; in the long test, I gave it permission to repair the first 100 errors, which it did successfully. Now I am confused about what I should do. Costs:

    a. Internal HDD, 500GB: Rs3518
    b.1 External HDD, 500GB: Rs3472
    b.2 External HDD, 1TB: Rs5500
    c. Internal-to-external converter: Rs650

    I have the following options:

    (i) Buy an external HDD and back up my data, then try to repair the bad sectors of the HDD. Two cases arise: (a) my internal HDD gets repaired [almost], or (b) it doesn't, in which case I need to buy another internal HDD and replace the damaged one, OR break the seal of the external one and put it inside my laptop as an internal drive. Breaking the case involves risks.
    (ii) Buy an internal HDD and an internal-to-external converter case [not very reliable] and back up my data, then try to repair the bad sectors. Again, two cases arise: (a) my internal HDD gets repaired [almost], or (b) it doesn't, and I just put in the new internal HDD I bought.

    Experts, please guide me as to which will be the best-value option. Also, if an HDD is failing, should I avoid even reading from it, in case other sectors fail too? What I mean is: is it wrong to read from the HDD without taking a backup first?

    Read the article

  • Emacs: Changing the location of auto-save files

    - by Dominic Rodger
    I've currently got:

        (setq backup-directory-alist `((".*" . ,temporary-file-directory)))
        (setq auto-save-file-name-transforms `((".*" ,temporary-file-directory t)))

    in my .emacs, but that doesn't seem to have changed where auto-save files get saved (it has changed where backup files get saved). M-x describe-variable shows that temporary-file-directory is set to /tmp/, but when I edit a file called testing.md and have unsaved changes, I get a file called .#testing.md in the same directory. How can I make that file go somewhere else (e.g. /tmp/)? I've had no luck with these suggestions, so any suggestions are welcome! If it helps, I'm on GNU Emacs 23.3.1, running Ubuntu.

    Read the article

  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to termi

    - by imagodei
    SITUATION: A larger company acquires a smaller one, and the IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution; the servers are approx. 5 years old. Networking is Cisco equipment (switches, wireless, ASA). Mail is a hosted Exchange solution. There are approx. 35 desktops and laptops in the company.

    IT infrastructure unification - there are 2 merging proposals:

    1) Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, with synchronization set up to a centralized location at the larger company. Similarly, backup remains local and, if needed, is replicated to a centralized location. Licensing is managed by the smaller company.

    2) All servers are moved to a centralized location at the larger company. As many desktop machines as possible are replaced by thin clients; the actual machines are virtualized and hosted by a terminal server at the same central location, using Citrix solutions. Only a router and a site-2-site VPN connection remain at the smaller company, plus a backup internet line to ensure near-100% availability. Licensing is mainly managed by the larger company; only specialized software for PCs that will not be virtualized is managed by the smaller one.

    I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

    Read the article

  • Some Sonatype Nexus questions

    - by smallufo
    I deployed a Sonatype Nexus server inside my LAN, mapping some remote repositories to my public repositories. First question: why don't these repositories sync with the "real" repositories? For example, I mapped Maven Central (http://repo1.maven.org/maven2) to "central", but when I browse http://smallufo:8081/nexus/content/repositories/central/org/springframework/, the packages are not complete. In http://repo2.maven.org/maven2/org/springframework/ there are tons of artifacts, but I only have some of them, and the versions are old - e.g. spring-core is only 2.5.6.SEC01, but the latest version is 3.0.2.RELEASE. And my Maven client seems to find only the old artifacts. "central" is a proxy repository, so it should match the remote server. I tried "Expire Cache", "ReIndex" and "Incremental ReIndex" on the whole "central" repository; after a long time at almost 100% Java process load, the situation is no better - a few artifacts were added, but it still doesn't reflect the real Maven Central data. Second question: what's the difference between "Expire Cache", "ReIndex" and "Incremental ReIndex"? Even though I can "search" and find spring-core.3.0.2.RELEASE, my m2eclipse still cannot find it. I can also see spring-core-3.0.2.RELEASE in the "index" (but it is not available in "storage"). Why can't m2eclipse make use of it? It seems m2eclipse can only install artifacts that are in storage. If this is how Nexus works, how do I "force" a download of spring-core-3.0.2.RELEASE into Nexus's storage? How do I solve these strange incompatibilities? Thanks a lot!

    Read the article

  • Applescript create event in calendar, how do I remove the default alert?

    - by zero0cool
    Running 10.8 Mountain Lion, I'm trying to create a new event with AppleScript like this:

        set theDate to (current date)
        tell application "Calendar"
            tell calendar "Calendar"
                set timeString to time string of theDate
                set newEvent to make new event at end with properties {description:"Last Backup", summary:"Last Backup " & timeString, location:"To a local unix system", start date:theDate, end date:theDate + 15 * minutes, allday event:false, status:confirmed}
                tell newEvent
                    delete every display alarm
                    delete every sound alarm
                    delete every mail alarm
                    delete every open file alarm
                end tell
            end tell
        end tell

    However, this does not remove the default Calendar alert, which one can set through Calendar preferences (30 minutes prior, in my case). How do I create an event with no alarms at all through AppleScript?

    Read the article

  • Problems restoring old backups in NetBackup 6.5

    - by gharper
    I had a server that was decommissioned and replaced last year, and since it was no longer in use, I deleted its client and backup policy from the NetBackup Admin Console shortly afterwards. I recently got a request to restore a file from the old server; however, when I specify the source client for the restore, I get an error message saying:

        WARNING: server (backupserver) does not contain any backups for client (oldserver)
        using the specified policy type (Standard) as requested by client (backupserver). [Ok]

    In addition to that error, I can't seem to run a Client Backup report on the old client any more to determine what tapes I need to recall in order to re-index and restore the files. My questions:

    1. Does deleting the client somehow remove NetBackup's ability to ever restore files from the old system, even if the backups have a retention period of infinity?
    2. Is there a way to restore the file from tape, assuming I can figure out which tape I need?

    Read the article

  • Recover deleted files on windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders and hid them all, except for some weird ones it created, including:

        ..exe
        porn.exe
        secret.exe
        password.exe

    and so on. We managed to restore the files with the attrib command, unhiding them and unmarking them as system files. However, we have noticed that we are missing some 4 to 5 folders, of which (with my luck) 2 belong to the two most important clients we have. I am not sure if these files were deleted by the worm/virus or by my colleagues, who are not owning up to it, but the files are now gone. Worst of all, we had no backup whatsoever. (Yes, I know we should not have done that, but it is a lesson learned, and since last night we have created two forms of backup, one to an external device and one in the cloud, though I doubt either will help us now.) We have 1 Windows 2008 file server and 4 client computers running Windows 7. I would be grateful if anyone can help us with how we can recover from this disaster, which could potentially put us out of business.
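    As an aside, the attrib-style cleanup described above can also be done from PowerShell. A minimal sketch - D:\Shares is a placeholder path, and this only clears the Hidden/System flags; it cannot bring back deleted folders:

        # Sketch: recursively clear the Hidden and System attributes the malware set.
        Get-ChildItem -Path "D:\Shares" -Recurse -Force | ForEach-Object {
            $_.Attributes = $_.Attributes -band (-bnot ([IO.FileAttributes]::Hidden -bor [IO.FileAttributes]::System))
        }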

    Read the article

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of mongodb. Output from the script (to STDERR) contains the following exceptions, yet the backup completes and the dump files are created:

        ###### WARNING ######
        STDERR written to during mongodump execution.
        The backup probably succeeded, as mongodump sometimes writes to STDERR, but you
        may wish to scan the error log below:
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: HostAndPort: bad port #
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed

    I know that the host and port are correct. If I run mongodump --host=127.0.0.1:27017 --journal (which is the effective command from automongobackup, based on the options set and my reading of the source code), everything runs clean without any error reporting, and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, when a straight call to mongodump does not? Environment: Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23)), AutoMongoBackup VER 0.9, mongodb v2.0.2.

    Read the article

  • Autosaving files in Emacs or XEmacs (preferably on loss of focus)

    - by Spencer
    Ideally, I want to replicate in Emacs some functionality from TextMate, whereby on loss of focus - i.e., when I click away from the buffer - my file saves. If this isn't possible, I want to customize Emacs so that it saves the file after every character I write. By this I don't mean autosaving to the ~ backup files; I want to save the file I am currently working on. I am working on a Fedora VM. Note that I am not looking for a backup or autosave: I want the file I am actually in to be saved, so that if I loaded the HTML file I am editing in a web browser, it would reflect my new changes without me having to explicitly save.

    Read the article
