Search Results

Search found 18154 results on 727 pages for 'track changes'.

  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users access the files over a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility, and users can access previous versions of a file (from the backups) using Windows Explorer's "Previous Versions" feature. This setup is currently working well, except for the following:

    1. We would prefer access to hourly versions of each file, not daily.
    2. The previous-version mechanism is tied to the backup mechanism. Windows 7 performs a full backup every week and an incremental backup every day, and the previous versions of a file are simply whatever is available in the backups. If you have 20GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if there are few changes to the documents. It's a pretty inefficient use of space.
    3. Looking up previous versions of a file is very slow (tens of minutes). This is probably related to the previous issue: Windows has to traverse all of its backups.

    I am considering using SVN plus TortoiseSVN's autocommit/autoupdate. It would have the following advantages:

    - Backups are easy and also cover the whole history of each document (just back up the repository).
    - Previous versions can be created frequently; I think an svn commit/update cycle could be run every two minutes or so.
    - Users can sync over the net.

    However, I can see the following issues:

    - More conflicts than in the original setup, because multiple users can now edit the same file even when both are online, i.e. can connect to the SVN repo. The users can of course lock the file before editing, but that would mean they have to adjust.
    - Delay in the propagation of file changes. With Windows 7 file sharing, changes made by one online user are instantly available to other online users. With the SVN setup, changes will only be propagated when the users execute the svn add/commit/update sequence, so the delay will probably be a few minutes. This workflow will no longer work: "Hi, I just edited document X, can you have a quick look?"

    I would like to ask the opinion of the community on alternative setups, or on improvements to the above setup to work out the kinks.
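
    As a rough illustration of the autocommit idea, here is a minimal sketch of a script that could run as a scheduled job on each client (the working-copy path and the two-minute interval are placeholders, and svn's conflict handling would still need thought):

        #!/bin/sh
        # Hypothetical autocommit loop: fetch other users' changes, pick up
        # local edits, and commit them, repeating every two minutes.
        WC=/path/to/working/copy
        cd "$WC" || exit 1
        while true; do
            svn update --non-interactive            # pull in remote changes
            svn add --force --quiet .               # schedule any new files
            svn commit -m "autocommit $(date)" --non-interactive
            sleep 120                               # two-minute cycle
        done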

  • Access SharePoint 2010 site with Google Chrome Frame inside Internet Explorer 6

    - by aphoria
    Is it possible to use Google Chrome Frame in IE 6 to access a SharePoint 2010 site without displaying the WarnOnUnsupportedBrowsers message? The message I get is this: "Your Web browser will have problems displaying this web page. Changes to the site may not function properly. For a better experience, please update your browser to its latest version." Ideally, I'm hoping to only have to make changes to the client, but if the site itself needs to be modified, I can probably get it done.

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via Boot Camp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users, and especially without using Mac's Parental Controls software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons.

    Pros:

    - The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.)
    - Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided.
    - There are good options that allow you to create storage spaces, either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option in itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.)

    Cons:

    - Any time we actually need to make a change (upgrade basic software, permanently add a printer or an AirPort, add new software), the process is a bit more complex: reboot into a special mode (thawed state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot.
    - Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted; the file is gone.

    These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

  • FTP server monitoring

    - by Supra Man
    I need to monitor some FTP servers for any changes in their file structure. The things I need to monitor are: how many times a file is downloaded (not sure if that's possible), whether files are changed, whether files are deleted, and whether the FTP server still exists. I would like this to be something I can run server-side, and I would like an SMS message or email if any of the above changes have occurred. Does anyone have experience with this, or a recommendation for a particular language or script? Thanks. Just for reference, I don't want to install an FTP server; I just want something to help me monitor other remote FTP servers by periodically logging in.
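
    One low-tech sketch of the "periodically log in" idea, assuming curl, diff, and a working mail command are available (the host, credentials, and addresses are placeholders): take a directory listing on a schedule, diff it against the previous one, and mail any difference. Note that download counts are not visible to a plain FTP client; that would require the server's own logs.

        #!/bin/sh
        # Hypothetical FTP poller: run from cron; mails when the listing
        # changes or the server stops answering.
        HOST="ftp://ftp.example.com/"
        STATE=/var/tmp/ftp-listing
        curl --silent --list-only --user user:pass "$HOST" > "$STATE.new" || {
            echo "FTP server unreachable" | mail -s "FTP alert" admin@example.com
            exit 1
        }
        if [ -f "$STATE" ] && ! diff -q "$STATE" "$STATE.new" > /dev/null; then
            diff "$STATE" "$STATE.new" | mail -s "FTP listing changed" admin@example.com
        fi
        mv "$STATE.new" "$STATE"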

  • Parse and validate Asterisk dialplan before committing

    - by adaptive
    I recently made a number of changes to my Asterisk dialplan and would like to validate these changes before I commit. I am thinking in terms of a "write code", "compile", "debug" cycle. I am very new to Asterisk and am trying to build my dialplan slowly, but the server is already in use (by the spouse), so I'd like to minimize interruptions as much as possible. If I can at least verify that the code is correct, I can then debug in Asterisk as calls take place.
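
    There is no standalone dialplan lint step that I know of, but one hedged approach is to let Asterisk itself parse the candidate dialplan and watch for complaints (the context name and log path below are placeholders and may differ on your install):

        # Reload the dialplan; parse errors and warnings go to Asterisk's log
        asterisk -rx "dialplan reload"
        # Inspect what was actually loaded for a given context
        asterisk -rx "dialplan show incoming"
        # Check the log for parser complaints
        grep -i error /var/log/asterisk/messages

    Keeping new contexts unreferenced by any live extension until they check out clean would also limit the disruption to active calls.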

  • Fixing BIND9 rndc error "connection to remote host closed"

    - by Josh
    I just made some changes to a DNS zone in Webmin and clicked the "Apply Changes" button. I received this error message:

        rndc: connection to remote host closed
        This may indicate that the remote server is using an older version of
        the command protocol, this host is not authorized to connect, or the
        key is invalid

    How can I troubleshoot/repair this? I copied parts of the BIND config from a failing server, so I suspect that's what's causing it...
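
    A few hedged first steps (standard BIND tools; the paths vary by distro, so treat /etc/named.conf and /etc/rndc.key as placeholders): this error usually means the controls/key statements that named is using don't match the key rndc presents, so verify the copied config parses and compare the shared secrets.

        # Validate the copied configuration before anything else
        named-checkconf /etc/named.conf
        # Reproduce the failure outside Webmin
        rndc status
        # The secret in rndc's key file...
        cat /etc/rndc.key
        # ...must match the key/controls statements named is using
        grep -A4 controls /etc/named.conf

    If the key was copied from the other server, regenerating it on this host (e.g. with rndc-confgen) and referencing the same key in both files is the usual fix.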

  • Fedora 17 not saving iptables

    - by Louis W
    For some reason my Fedora box is not saving changes made to my iptables rules. I add the rules:

        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        iptables -I INPUT -p tcp --dport 443 -j ACCEPT

    and then restart the service:

        service iptables status
        service iptables restart
        Redirecting to /bin/systemctl status iptables.service

    Then, after starting, my changes are not there anymore. I also tried saving:

        [root@VTM01 ~]# service iptables save
        Redirecting to /bin/systemctl save iptables.service
        Unknown operation save
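
    A hedged workaround for systemd-era Fedora (assuming the classic iptables service, which reads /etc/sysconfig/iptables at start): dump the live ruleset to that file yourself and make sure the unit is enabled.

        # Persist the current in-memory rules to the file loaded at boot
        iptables-save > /etc/sysconfig/iptables
        # Ensure the service starts (and restores the rules) on reboot
        systemctl enable iptables.service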

  • Do I need to recycle web server after modifying hgrc?

    - by slolife
    I have set up a Mercurial website in IIS7 using this tutorial: http://mercurial.selenic.com/wiki/HgWebInIisOnWindows. I am slowly figuring out all of the options that I can tweak for the served repositories, but I'd like to know if and when I need to recycle the website process in order to pick up changes made to any of the repositories' hgrc files. Does the website pick up the changes on the next request, or do I always need to recycle? Additionally, do I need to "restart" the website, or run iisreset?

  • How to use Git over multiple similar systems

    - by Spidfire
    I have a system I need to duplicate across several machines, making minor changes such as less/css variables and configuration files. Is there a best practice for this kind of problem? I currently do:

        git clone repo
        cp ../default/config.js config.js
        ...

    for several files. Should I create different branches of the same repo, or should I create a repo for the changes? It is currently doable, but it will get annoying once there are more than 5 similar systems.
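
    One common pattern, sketched here with made-up branch and file names, is a shared mainline plus one long-lived branch per system carrying only that system's configuration commits; shared work then flows into each system branch by merge or rebase.

        # One-time setup: a branch per deployment with its config tweaks
        git checkout -b system-a master
        # ...edit config.js, variables.less for system A...
        git commit -am "system-a: local configuration"

        # Whenever shared code changes, fold it into each system branch
        git checkout system-a
        git merge master        # or: git rebase master

    The same idea works with a separate config-only repo layered on top of the shared repo, but branches keep all the history in one place.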

  • EC2 Instance of Wordpress not mapping URLs correctly

    - by Benjamin
    I'm using an AWS EC2 micro instance to run a WordPress blog. I've successfully mapped a subdomain to the Elastic IP for the micro instance. After a few minor changes, the URL I mapped to the Elastic IP (blog.example.com) opens the WordPress home page, but whenever I click on any of the WordPress links, the domain changes to the AWS public DNS for that instance (http://ec2-123-45-678-910.compute-1.amazonaws.com/wordpress/). How do I fix the URLs so that they all follow the subdomain I have set up?
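
    WordPress generates links from the siteurl and home values saved at install time, so if the install happened under the EC2 public DNS name, that is what every link uses. A hedged fix (the database name, user, and table prefix below are assumptions) is to point both values at the subdomain:

        # Update the stored URLs (assumes the default wp_ table prefix)
        mysql -u wpuser -p wordpress -e \
          "UPDATE wp_options SET option_value='http://blog.example.com' \
           WHERE option_name IN ('siteurl','home');"

    Alternatively, defining WP_HOME and WP_SITEURL in wp-config.php overrides the stored values without touching the database.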

  • Booting a virtual OS on a physical environment

    - by Nrew
    Is there an application that allows you to add a virtualized version of your operating system to the MBR? By virtualized, I mean that if you boot into it, any changes made will not be committed to the disk, just like Deep Freeze, Returnil, and ShadowUser work. Is it possible to do that? I ask because the applications mentioned above require you to reboot if you want changes not to be retained.

  • Hard drive restore on reboot on Windows Embedded

    - by sav
    My company has an old, out-of-service device with Windows Embedded on it that we want to repurpose. Any changes to the drive (an SD card with 2 partitions), i.e. installed software, IP address, system settings, files, are reset/deleted when we reboot the device. We can successfully make changes to the drive by plugging it into a PC, but that has its limitations, and we would like to be able to use our device. Can anyone tell us more about the technology used for doing this and how/if we can disable it?

  • Apache restart on every request

    - by Michael Gummelt
    In development, I'd like changes to my application to propagate immediately. "MaxRequestsPerChild 1" restarts each process after a request, but if there are multiple server processes, changes still don't propagate until each process restarts. I've tried several different directives to limit the number of server processes to 1:

        StartServers 1
        MinSpareThreads 1
        MaxSpareThreads 1
        ThreadLimit 1
        ThreadsPerChild 1
        MaxClients 1
        MaxRequestsPerChild 1

    Apache still starts with multiple (3) apache2 processes. I'm using the mpm_worker module.
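
    A hedged guess at the cause: with mpm_worker, the number of child processes is capped by ServerLimit rather than by the thread directives, and one of the processes observed is the non-serving parent. Something like the following (a sketch for a development box, not production advice) should get closer to a single serving child:

        # mpm_worker: cap the child-process count itself
        ServerLimit          1
        StartServers         1
        ThreadsPerChild      1
        MaxClients           1
        MaxRequestsPerChild  1

    Switching to mpm_prefork with the equivalent single-child settings is another option if worker still holds extra processes.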

  • How to make a searchable PDF document from a scan AND a source Word document?

    - by Evengard
    Well, I have a scanned PDF with some slight changes made by hand, and the source file. I wish to make a PDF that is searchable (based on the text from the source; the changes would remain as they are). I am searching for a free (and, even better, portable) piece of software that would allow me to somehow "combine" the images from the scan with the text from the source DOC file, so that it SEEMS like the image is selectable and searchable.

  • Windows 7: Always remember UAC choice for an application

    - by Svish
    I have some applications that I open from time to time, and I always get this UAC message: "Do you want to allow the following program from an unknown publisher to make changes to this computer?" Is there a way I can mark a single program so that it won't ask me that again? I think it is good that it asks me the first time, but some programs I launch more often, and I am OK with them making changes and don't want to be asked every time.
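
    Windows 7 has no built-in per-application "remember my choice" for UAC, but a commonly cited workaround (sketched here; the task name and path are placeholders) is to wrap the program in a scheduled task that runs with highest privileges, then launch the task instead of the executable:

        :: Create the elevated task once, from an elevated prompt
        schtasks /Create /TN "LaunchMyApp" /TR "C:\Tools\MyApp.exe" /SC ONCE /ST 00:00 /RL HIGHEST
        :: A desktop shortcut to this command then starts the app without a prompt
        schtasks /Run /TN "LaunchMyApp"

    Note that this effectively whitelists the program, so it is only appropriate for software you fully trust.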

  • Announcing SonicAgile – An Agile Project Management Solution

    - by Stephen.Walther
    I’m happy to announce the public release of SonicAgile – an online tool for managing software projects. You can register for SonicAgile at www.SonicAgile.com and start using it with your team today. SonicAgile is an agile project management solution which is designed to help teams of developers coordinate their work on software projects. SonicAgile supports creating backlogs, scrumboards, and burndown charts. It includes support for acceptance criteria, story estimation, calculating team velocity, and email integration. In short, SonicAgile includes all of the tools that you need to coordinate work on a software project, get stuff done, and build great software. Let me discuss each of the features of SonicAgile in more detail. SonicAgile Backlog You use the backlog to create a prioritized list of user stories such as features, bugs, and change requests. Basically, all future work planned for a product should be captured in the backlog. We focused our attention on designing the user interface for the backlog. Because the main function of the backlog is to prioritize stories, we made it easy to prioritize a story by just drag and dropping the story from one location to another. We also wanted to make it easy to add stories from the product backlog to a sprint backlog. A sprint backlog contains the stories that you plan to complete during a particular sprint. To add a story to a sprint, you just drag the story from the product backlog to the sprint backlog. Finally, we made it easy to track team velocity — the average amount of work that your team completes in each sprint. Your team’s average velocity is displayed in the backlog. When you add too many stories to a sprint – in other words, you attempt to take on too much work – you are warned automatically: SonicAgile Scrumboard Every workday, your team meets to have their daily scrum. During the daily scrum, you can use the SonicAgile Scrumboard to see (at a glance) what everyone on the team is working on. For example, the following scrumboard shows that Stephen is working on the Fix Gravatar Bug story and Pete and Jane have finished working on the Product Details Page story: Every story can be broken into tasks. For example, to create the Product Details Page, you might need to create database objects, do page design, and create an MVC controller. You can use the Scrumboard to track the state of each task. A story can have acceptance criteria which clarify the requirements for the story to be done. For example, here is how you can specify the acceptance criteria for the Product Details Page story: You cannot close a story — and remove the story from the list of active stories on the scrumboard — until all tasks and acceptance criteria associated with the story are done. SonicAgile Burndown Charts You can use Burndown charts to track your team’s progress. SonicAgile supports Release Burndown, Sprint Burndown by Task Estimates, and Sprint Burndown by Story Points charts. For example, here’s a sample of a Sprint Burndown by Story Points chart: The downward slope shows the progress of the team when closing stories. The vertical axis represents story points and the horizontal axis represents time. Email Integration SonicAgile was designed to improve your team’s communication and collaboration. Most stories and tasks require discussion to nail down exactly what work needs to be done. The most natural way to discuss stories and tasks is through email. However, you don’t want these discussions to get lost. 
When you use SonicAgile, all email discussions concerning a story or a task (including all email attachments) are captured automatically. At any time in the future, you can view all of the email discussion concerning a story or a task by opening the Story Details dialog: Why We Built SonicAgile We built SonicAgile because we needed it for our team. Our consulting company, Superexpert, builds websites for financial services, startups, and large corporations. We have multiple teams working on multiple projects. Keeping on top of all of the work that needs to be done to complete a software project is challenging. You need a good sense of what needs to be done, who is doing it, and when the work will be done. We built SonicAgile because we wanted a lightweight project management tool which we could use to coordinate the work that our team performs on software projects. How We Built SonicAgile We wanted SonicAgile to be easy to use, highly scalable, and have a highly interactive client interface. SonicAgile is very close to being a pure Ajax application. We built SonicAgile using ASP.NET MVC 3, jQuery, and Knockout. We would not have been able to build such a complex Ajax application without these technologies. Almost all of our MVC controller actions return JSON results (While developing SonicAgile, I would have given my left arm to be able to use the new ASP.NET Web API). The controller actions are invoked from jQuery Ajax calls from the browser. We built SonicAgile on Windows Azure. We are taking advantage of SQL Azure, Table Storage, and Blob Storage. Windows Azure enables us to scale very quickly to handle whatever demand is thrown at us. Summary I hope that you will try SonicAgile. You can register at www.SonicAgile.com (there’s a free 30-day trial). The goal of SonicAgile is to make it easier for teams to get more stuff done, work better together, and build amazing software. Let us know what you think!

  • Mercurial push error - hook failed

    - by raychenon
    I committed some changesets and now want to push them to the remote repository, but I get this error during the push:

        pushing to http://hguser:***@z2xeu:1337/hg/cms
        searching for changes
        1 changesets found
        remote: adding changesets
        remote: adding manifests
        remote: adding file changes
        remote: added 1 changesets with 2 changes to 2 files
        remote: Attempt to commit or push text file(s) using CRLF line endings
        remote: in 700e14d32918: src/main/webapp/WEB-INF/jsp/search.jsp
        remote:
        remote: To prevent this mistake in your local repository,
        remote: add to Mercurial.ini or .hg/hgrc:
        remote:
        remote: [hooks]
        remote: pretxncommit.crlf = python:hgext.win32text.forbidcrlf
        remote:
        remote: and also consider adding:
        remote:
        remote: [extensions]
        remote: hgext.win32text =
        remote: [encode]
        remote: ** = cleverencode:
        remote: [decode]
        remote: ** = cleverdecode:
        remote: transaction abort!
        remote: rollback completed
        remote: abort: pretxnchangegroup.crlf hook failed
        [command returned code 1 Wed Jan 12 11:14:55 2011]

    To prevent this, I added the following to the .hg/hgrc file (there is no Mercurial.ini):

        [hooks]
        pretxncommit.crlf = python:hgext.win32text.forbidcrlf
        pretxnchangegroup.crlf = python:hgext.win32text.forbidcrlf

        [extensions]
        hgext.win32text =

    But I still get the same error as above. I'm pretty sure pretxnchangegroup.crlf is involved in this, maybe a Python file? I use only Unicode characters in the files I committed.
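
    A hedged reading of what is happening: the pretxnchangegroup hook that aborts the push runs on the remote repository, so the local hgrc additions only prevent new CRLF commits; the changeset already committed still contains CRLF line endings and will keep being rejected. If that commit was your last local transaction, one way out (the file name is taken from the error message; dos2unix is assumed to be available) is:

        # Undo the local commit (hg rollback only undoes the last transaction)
        hg rollback
        # Convert the offending file to LF endings
        dos2unix src/main/webapp/WEB-INF/jsp/search.jsp
        # Commit again and push
        hg commit -m "search.jsp with LF line endings"
        hg push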

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

    I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK. I don't recall exactly what version of GTK, though. Dominique Toupin from Ericsson presented something about user requirements for tracing, mostly about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides.

    The interesting thing to me was of course the new version of uprobes without the underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, this "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel that interfaces to utrace and is controlled via the gdb remote protocol also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about gcc and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc and usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (due to the new gcc plugin architecture and the availability of the intermediate results of the compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.

    After, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker; it doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested/monitored, because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code. Finally, Tom Tromey presented about gdb and the Archer project. Archer is a development branch of gdb mostly done by Red Hat, where they are focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline.

    In general it was a good conference. I did miss most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered person a free Android phone. Yey.

  • How to undo an hg merge? (I think.)

    - by Grumdrig
    I'm new to collaborating with Mercurial. My situation:

    1. Another programmer changed rev 1 of a file to replace 4-space indents with 2-space indents (i.e. changed every line). Call that rev 2, pushed to the remote repo.
    2. I've committed substantive changes to rev 1, with various code changes, in my local workspace. Call that rev 3.
    3. I've hg pulled and hg merged without a clear idea of what was going on. The conflicts are myriad and not really substantive.

    So I really wish I'd changed my local repo to 2-space indents before merging; then the merge would be trivial (I'm supposing). But I can't seem to back up. I think I need to hg update -r 3, but it says "abort: outstanding uncommitted merges". How can I undo the merge, change the spacing in my local repo, and re-merge?
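
    Assuming the merge has not been committed yet (the "outstanding uncommitted merges" message suggests exactly that), a clean update should discard the merge attempt. Note that --clean throws away all uncommitted changes in the working copy, so check hg status first:

        # Discard the uncommitted merge; the working copy returns to rev 3
        hg update --clean -r 3
        # ...reindent to 2 spaces, commit, then redo the merge...
        hg merge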

  • Agile Testing Days 2012 – Day 1 – The birth of the #unicorn…

    - by Chris George
    Still riding the high from the tutorial day, I arrived at the conference venue eager to get cracking with the day's talks.

    The opening keynote was "Disciplined Agile Delivery: The Foundation for Scaling Agile", presented by Scott Ambler. The general ideas behind the methodology, such as not re-inventing the wheel and being goal-driven rather than prescriptive in how you work, certainly struck chords with how we are trying to work in my team. Scott made some interesting observations about how Scrum is quite prescriptive, and asked: is this really agile? I agreed with quite a few of his points on how what works for one team may not work for another; how a team works should be driven by context and reflection, not process and prescription. However, I was somewhat dubious about some of the statistics he rolled out towards the end.

    Out of this keynote, though, was born something that was to transcend this one presentation. During the talk, Scott mentioned on more than one occasion "in the real world", and at one point made reference to people living in the land of unicorns and rainbows. The challenge was then laid down on Twitter for all speakers to include a unicorn in their presentations… and for the most part this happened! It became an identity for this year's conference, and I'm sure something that any attendee will always associate with Agile Testing Days 2012!

    Following this keynote, I attended "Going agile with Automated GUI Testing – Some personal insights" by Jan Zdunek from codecentric on the vendor track. My speciality is test automation, and in particular GUI testing, so this drew me to this talk more than the others. Thankfully, it was made clear from the very start that this was not peddling any particular product (even though it was on the vendor track), and Jan faithfully stuck to that. Most of the content was not new to me, but it was really comforting to hear someone else with very similar experiences to my own; in particular, things like how GUI testing is hard and is not a silver bullet, and how record & replay is NOT a good thing to do (which drew a somewhat inflammatory tweet from an automation company when I tweeted that!).

    Something that I have started hearing around the place, and that has certainly been murmured at work, is to push more of the automation coding onto the developers; after all, they are the coding experts. I agree with this to a degree, but I personally enjoy coding and find it very rewarding, so I'd be reluctant to give it up. I think there are some better alternatives, such as pairing with a developer. Lastly, Jan mentioned, almost in passing, that we should consider virtualisation for GUI testing to cover configuration combinations. On my project we've been running our win32/.NET GUI tests in cloud virtualisation for a couple of years now… I really should write about that!

    After lunch, the second keynote of the day was by Lisa Crispin and Janet Gregory: "Myths about Agile Testing, De-Bunked". It started off well, with the two ladies donning Medusa-style headbands whilst debunking several myths about agile testing! I got the impression that it was perhaps not as slick as they would have liked, but then Janet was suffering from a very sore throat and kept losing her voice. Nevertheless, the presentation was captivating, and they debunked several myths such as "testing is dead", "testers must write code", and "agile teams always deliver faster". I didn't take many notes for this because it was being recorded, but unfortunately the recordings have not been posted yet, so I'll write more about this when they are.

    The TestLab was held during a somewhat free-for-all time during most of the afternoon. It looked intriguing and proved to be one of the surprising experiences of the conference for me. Run by James Lyndsay and Bart Knaack, it consisted of a number of 'stations' that offered different testing problems. I opted for testing a mathematical drawing app called Geogebra, the task being to pair up and exploratory-test it. After an allotted time, we discussed the issues we'd found and decided whether we wanted to continue 'playing', to which we all agreed! It was fun!

    The last track talk of the day was "Developers Exploratory Testing – Raising the bar" by Sigge Birgisson. One of the teams at Red Gate have tried dev or team exploratory testing a couple of times, and I was really interested to go to the presentation that prompted that. I was not disappointed! Sigge gave a first-class presentation, and not only explained what DET was all about, but also how to go about implementing it. Little tips like calling it a 'workshop' rather than 'testing' I can really see working!

    Monday evening saw the presentation of the award for Most Influential Agile Testing Professional Person go to a much-deserving Lisa Crispin. The evening was great, with acrobatics, magic and music.

    My Takeaway Triple from Day 1:

    - Some of the cool stuff that was suggested in the GUI testing talk, we are already doing. I should write about that!
    - Testing is not dead! Perhaps testing will become more of a skill than a specific role, but it is certainly not dead.
    - Team/developer exploratory testing… seems like a no-brainer, assuming you have a team who is willing.

    Day 2 – Coming soon…
