Search Results

Search found 30724 results on 1229 pages for 'backup solution'.


  • Migrating Magento Concern

    - by Pankaj Upadhyay
    We have a Magento 1.5.0.1 store running at a hosting provider, and we need to migrate it from that server to a new hosting provider. A technician from the new provider told me to do the following: go into the cPanel Backup Wizard, make a FULL BACKUP and download the zip file, then upload that zip file to their server in my root folder, tell them, and they will do the restore. My concerns: Will everything work as expected? What about the connection strings and the database? Will the database be created automatically and work the same? Also, I read somewhere that version 1.5.0.1 used an older database format that might not work on newer MySQL versions; could that have an impact too? Should I proceed in this manner, or do I need to take care of some additional things to ensure smooth running?
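
    If the cPanel full backup does not carry the database over cleanly, it can also be moved by hand. The sketch below is only illustrative: the user, database and file names are placeholders, and the one Magento-specific assumption is that Magento 1.x keeps its database credentials in app/etc/local.xml.

        # On the old host: dump the store database (names are placeholders)
        mysqldump -u magento_user -p magento_db > magento_db.sql

        # On the new host: create the database, then import the dump
        mysql -u magento_user -p -e "CREATE DATABASE magento_db"
        mysql -u magento_user -p magento_db < magento_db.sql

        # Check that Magento's config points at the new host's credentials
        grep -A6 '<connection>' app/etc/local.xml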

    Read the article

  • Can SysPrep (or anything else) be used to make a Win XP partition from another computer bootable?

    - by chris5gd
    I've used Paragon Backup and Recovery Free, as recommended to me in my other question, to back up my C: (Win XP) and D: (installed apps) partitions. Before taking the rather scary step of breaking the RAID 0 array on which Windows is currently installed and restoring to one of the individual drives, I'd quite like to test the restorability of the imaged partitions. I've restored them onto a spare disk in another computer, which of course won't boot from them in their current state. Is it possible to use SysPrep (or another tool) on the restored partitions to make them bootable?

    Read the article

  • limit the speed of writing files to NFS

    - by xgwang
    CentOS 5.6. An NFS share is mounted on the server for backup disk space. When the backup job starts it can reach 80 MB/s, and we really did not expect it to take that much bandwidth, so I need to find a way to limit the write speed to NFS. I tried rsync with --bwlimit=5000. It did limit the read speed, but the accumulated data was still flushed to NFS in 80 MB/s bursts, with no write activity at all for seconds in between. Is there any way to limit the write speed to NFS?
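
    One sketch of a workaround, assuming a Linux NFS client: the bursts happen because the client caches writes and flushes them asynchronously, so --bwlimit only shapes the read side. Mounting the share synchronously makes rsync's own throttle apply to the writes as well. The mount point and paths below are placeholders.

        # Remount the backup share with synchronous writes so client-side
        # caching no longer defeats rsync's --bwlimit
        mount -o remount,sync /mnt/backup

        # Now the transfer rate seen on the wire should track the limit
        rsync -a --bwlimit=5000 /data/ /mnt/backup/data/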

    Read the article

  • VB app to web service

    - by brandon
    I know very little about web services, but I assumed one would be the solution I was looking for. Basically, I made an application in VB that I want to be ubiquitous, for lack of a better word. I need it to receive requests from multiple users and respond to them all at once. I was told, "technically, if you write a web service you can provide as many results back to users as are connected." Maybe there is another solution that will give me the results I want. Here is an example of what I'm trying to do. Let's say I make an application in VB that does math. I then make a website that lets a person enter 1 + 1; they click submit, and my website connects to my VB application running on my server, which is listening for requests. It accepts the request from my website, solves the math problem, and returns the answer "1 + 1 = 2" to the website. That is only an example of the type of thing I need. My problem is that I can't have multiple people visiting my website all connecting to that same application running on my server, so somehow the application needs to be accessible by multiple users. I was told a web service would be the answer, but if there is another solution I'd like to know. If the only solution is a web service, how can I convert the VB app to a web service? Do I have to convert the app to ASP.NET or some other language, or is there an easier option?

    Read the article

  • GNU Makefile: multiple outputs from single rule + preventing intermediate files from being deleted

    - by makesaurus
    This is sort of a continuation of the question from link text. The problem is that a single rule generates multiple outputs from a single input, and the command is time-consuming, so we would prefer to avoid recomputation. There is now an additional twist: we want to keep the files from being deleted as intermediate files, and the rules involve wildcards to allow for parameters. The solution suggested was to set up the following rules:

        file-a.out: program file.in
        	./program file.in file-a.out file-b.out file-c.out
        file-b.out: file-a.out
        	@
        file-c.out: file-b.out
        	@

    Then calling make file-c.out creates all three outputs, and we avoid issues with running make in parallel with the -j switch. All fine so far. The problem is the following. Because the above solution sets up a chain in the DAG, make treats it differently: file-a.out and file-b.out are considered intermediate files, and by default they get deleted as unnecessary as soon as file-c.out is ready. A way of avoiding that was mentioned somewhere here, and consists of adding file-a.out and file-b.out as prerequisites of the .SECONDARY target, which keeps them from being deleted. Unfortunately, this does not solve my case, because my rules use wildcard patterns; specifically, they look more like this:

        file-a-%.out: program file.in
        	./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out
        file-b-%.out: file-a-%.out
        	@
        file-c-%.out: file-b-%.out
        	@

    so that one can pass a parameter that gets included in the file name, for example by running make file-c-12.out. The solution the make documentation suggests is to list these implicit rule targets as prerequisites of .PRECIOUS, which keeps the files from being deleted. .PRECIOUS works, but it also prevents the files from being deleted when a rule fails and the files are incomplete. Is there any other way to make this work?

    Read the article

  • How to combine two separate unrelated Git repositories into one with single history timeline

    - by Antony
    I have two unrelated Git repositories (not sharing any ancestor check-in). One is a super repository consisting of a number of smaller projects (let's call it repository A). The other is just a makeshift local Git repository for a smaller project (let's call it repository B). Graphically, it looks like this:

        A0-B0-C0-D0-E0-F0-G0-HEAD  (repo A)
        A0-B0-C0-D0-E0-F0-G0-HEAD  (remote/master bare repo pulled & pushed from repo A)
        A1-B1-C1-D1-E1-HEAD        (repo B)

    Ideally, I would really like to merge repo B into repo A with a single history timeline, so it would appear that I originally started the project in repo A. Graphically, this would be the ideal end result:

        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD  (new repo A)
        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD  (remote/master bare repo pulled & pushed from repo A)

    I have been doing some reading on submodules and subtree merges (Pro Git is a pretty good book, by the way), but both of them seem to cater to maintaining two separate branches, with submodules able to pull changes from upstream and subtree being slightly less of a headache. Both solutions require additional, specialized git commands to handle check-ins and to sync between master and the subtree/submodule branch. Both solutions also result in multiple timelines (with --squash you even get three timelines with subtree). The closest solution from SO seems to talk about "graft", but is that really it? The goal is to have a single unified repository that I can pull/push check-ins to, so that there is no more repo B, just repo A in the end.
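
    A minimal sketch of one way to do this with plain git commands, assuming repo B's work can simply be replayed on top of repo A's history (the path and branch names are placeholders, and note this appends B's commits after A's rather than interleaving them by date):

        cd repoA
        git remote add repoB /path/to/repoB
        git fetch repoB

        # replay repo B's commits on top of repo A's master
        git checkout -b import-b repoB/master
        git rebase master

        # fast-forward master to include the replayed commits, then clean up
        git checkout master
        git merge --ff-only import-b
        git branch -d import-b
        git remote rm repoB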

    Read the article

  • Visual Studio internal project references not always working

    - by Chris
    I am using Visual Studio with a solution of 10 or so projects (mostly VB, some C#) which have various dependencies set up. Usually the solution compiles fine. Occasionally I get a build error saying that one of the referenced projects is the wrong version (I think it is always the same one; possibly there are two that can cause problems). When this happens, going to Solution Explorer, right-clicking on the mentioned project, choosing "Rebuild" and then running another full build makes it work fine. I assume something is set up wrong somewhere, but I didn't set up the solution myself initially, and a quick look through doesn't show anything immediately wrong. It feels like some kind of race condition, as if VS is internally recording the version number of the project it needs before that project has been rebuilt and thus gets it wrong, but surely VS should handle this sort of thing properly. Can anybody suggest places I could check to see whether this has been set up correctly? I should finally note that since I don't have a reliable repro, I may not be able to respond to questions too quickly. For example, the obvious "Could you give the exact error message?" will have to wait, since I didn't think to copy it this morning; it was only after I cleared it up with the above steps that I thought to post here. Similarly, any solutions may take a while to confirm.

    Read the article

  • Windows Home Server restore causes computer to be removed from the domain?

    - by unknown (google)
    I restored my Dell M4400, which is a company laptop, and now when I try to log on while connected to our corporate network I get an error saying that the domain controller could not be found or that the computer is not part of the domain. Everyone else can log on, so it seems my computer is no longer part of the domain, even though according to its settings it thinks it is. One thing of note: my computer crashed on 1/14/10, but I restored from a backup made on 12/20/09, so I am not sure if that made a difference. Also, I tried running "gpupdate" to refresh my group policy, but that did not seem to help. Any ideas? It seems like a bit of a flaw in the backup system for computers that are part of a domain. I would like to hear from someone with more knowledge about how a computer is recognized as part of a domain, to know whether this should be expected when doing a restore or whether I should file a trouble ticket.

    Read the article

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of MongoDB. The output from the script (to STDERR) contains the following exceptions, but the backup completes and the dump files are created:

        ###### WARNING ######
        STDERR written to during mongodump execution.
        The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: HostAndPort: bad port #
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed

    I know that the host and port are correct. If I run mongodump --host=127.0.0.1:27017 --journal (which is the effective command from automongobackup, based on the options set and my reading of the source code), everything runs cleanly without any error reporting and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, yet a straight call to mongodump does not? Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23)), AutoMongoBackup VER 0.9, mongodb v 2.0.2.
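
    A small debugging sketch, assuming the wrapper is a plain shell script (the script path is a placeholder): capture stderr from a manual dump, trace the wrapper to see the exact mongodump command line it builds, then compare the two.

        # stderr from a manual dump, for comparison
        mongodump --host=127.0.0.1:27017 --journal 2> /tmp/mongodump.stderr

        # run the wrapper with shell tracing to see every command it executes
        bash -x /usr/local/bin/automongobackup.sh 2> /tmp/automongobackup.trace
        grep mongodump /tmp/automongobackup.trace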

    Read the article

  • Error code 2503 - Cannot install software on Windows 7 (64Bit)

    - by SixfootJames
    A short while ago my hard drive died on me, and at the same time so did my 1 TB backup drive! I took the machine back to the guy I bought the PC from, and although the backup drive could not be recovered, he managed to get the machine working again by making a minor change in the BIOS, which got it out of the continuous loop it had found itself in after multiple BSOD episodes. Everything seems to be working fine, but yesterday when I tried to save something from Google Chrome I got an insufficient permissions error, and when I try to install software I get error 2503. I have already followed the suggestions here, but none of them worked for me. Any suggestions would be appreciated. EDIT: This started happening after I tried running a number of tests to get the machine working, including restoring to a previous restore point.

    Read the article

  • Recover deleted files on windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders and hid them all, apart from some weird ones it created, including: ..exe, porn.exe, secret.exe, password.exe, etc. We have managed to restore the files with the attrib command, unhiding them and unmarking them as system files. However, we have noticed that we are missing some 4 to 5 folders, of which (with my luck) 2 belong to the two most important clients we have. I am not sure whether these files were deleted by the worm/virus or by colleagues who are not owning up to it, but the files are now gone. Worst of all, we do not have any backup whatsoever. (Yes, I know we should not have let that happen; it is a lesson learned, and since last night we have created two forms of backup, one to an external device and one in the cloud, but I doubt any of that will help us now.) We have 1 Windows 2008 file server and 4 client computers running Windows 7. I would be grateful if anyone can help us recover from this disaster, which could potentially put us out of business.

    Read the article

  • Data transfer is extrem slow after partitioning extern usb drive

    - by user125912
    I bought an external USB 3.0 drive with 500 GB capacity. The OS is Windows 7, and I use the drive in a USB 2.0 slot with no problem. Initially I used it without making any partitions and it was fast as hell. Then I had the great idea to partition it: one partition for programs, one for data and one for backup. I used the free EASEUS Partition Master 9.1.1 and ended up with these partitions:

        F: Apps,   primary, NTFS, 100 GB
        H: Data,   logical, NTFS, 250 GB
        B: Backup, logical, NTFS, 150 GB

    THE PROBLEM: When I copy files from C: to F: I get a transfer rate of about 100 KB/s! When I copy files from C: to H: I get a transfer rate of about 4 MB/s! That is all much too slow, slower than before. What can I do to speed it up? Thanks in advance!

    Read the article

  • Snapshot/Save GPU Drivers

    - by ashes999
    Since I'm running XP/32-bit, my GPU drivers are quite fragile. On at least two separate occasions I've spent several hours trying to back up old driver versions and restore from them; writing down the driver versions is not enough. In the short term, I would like to somehow save, zip, back up, or snapshot the drivers so that if I need to reinstall my OS, I have a reliable way to get them back. ATI's website doesn't have the install kit anymore, and I don't have it saved; I googled, but didn't find the exact same version. How can I back up or save my drivers so that I can reinstall them later?

    Read the article

  • Nagios service active only when other service is failing

    - by Laimoncijus
    Is it possible to define a service to be active only while another service is failing? Consider the following example. Two hosts are available: HostA (primary) and HostB (backup). A Nagios service monitors the number of active connections to the host: it gives OK when the number of connections to the host is greater than 0, and FAILURE when the number of connections is 0. If I set up this Nagios service to monitor both hosts, HostA and HostB, it will give me OK for HostA (since it is the primary and all connections normally go to it) and FAIL for HostB (since it is the backup and receives no connections while HostA is alive). Can I make the Nagios service for HostB somehow depend on the service of HostA, and give no failures (or maybe be inactive) until the service on HostA starts failing?
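
    One possible sketch, assuming a standard Nagios servicedependency object (the host and service names below are placeholders): suppressing checks of HostB's service while HostA's service is OK gets close to "inactive until HostA fails".

        # skip checks and notifications for HostB's service while HostA's service is OK
        define servicedependency{
            host_name                       HostA
            service_description             Active Connections
            dependent_host_name             HostB
            dependent_service_description   Active Connections
            execution_failure_criteria      o
            notification_failure_criteria   o
        }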

    Read the article

  • How to efficiently restore Library folder partially deleted on OS X

    - by flow
    I am using OS X Lion. While trying to delete some files, I accidentally ran this from my home directory:

        rm -fr Library

    I realized it about 15 seconds later and did:

        killall rm

    Some folders inside "Library" have been deleted, of course. The system seems to be OK for now, but I fear what will happen if I reboot. I have a Time Machine backup from 5 days ago. I wonder whether a good solution would be to just copy the whole "Library" folder of my home directory from the backup and replace the current one. Or what do you think would be the best approach? PS: In order to restore just the deleted directories inside "Library": in which order does "rm" delete directories? Alphabetically?
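
    A minimal sketch of the copy-from-backup idea, assuming the backup volume is mounted at the usual Time Machine location (the machine, volume and user names below are placeholders): rsync with --ignore-existing only fills in what rm removed and leaves files changed in the last 5 days alone.

        sudo rsync -a --ignore-existing \
            "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/me/Library/" \
            "$HOME/Library/"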

    Read the article

  • Mercurial repository narrow clone?

    - by Berry Langerak
    Hi. I'm currently in the process of moving from Subversion to Mercurial, and I have to say I don't regret that decision. However, when trying to convert my project, I ran into a Mercurial problem which I can't seem to get fixed. I have two distinct projects: one is a framework, and the other is an application that relies on that framework. Here's what the repositories look like:

        The Framework repository:   docs/ deploy/ lib/ tests/
        The Application repository: application/ config/ lib/ tests/ www/

    What I'd like is for the application's lib/ directory to contain a copy of the framework's lib/ directory. I used to do this using svn:externals. I am aware that Mercurial supports the concept of subrepositories, but that doesn't seem like the "correct" solution: it doesn't actually pull in just the lib/ directory like I wanted, you still have to pull and push changes manually, and once you clone the framework repository you get all of it, not just the lib/ directory. I only need the lib/ directory, not the tests or the docs. I have thought up two different solutions to this problem, but I wonder which is best. The first would be to clone the framework into a different directory altogether and create a symlink in the application's lib/ directory which points to the framework's lib/ directory. Putting the symlink in .hgignore should make sure all is well, I think? That way you could edit the framework's code and commit that, and edit the application's code and commit that, too. The other option is to have multiple repositories. The framework gets pulled as a whole, which means you get the docs/, deploy/, tests/ etc. directories, which are not needed just to use the framework. I thought maybe creating a repository purely for the library might be a solution, although I sincerely doubt it, as the unit tests are very dependent on the library itself. Does anyone know a decent solution for this problem?
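
    A quick sketch of the symlink variant, with placeholder URLs and paths, in case it helps to see it spelled out (the last line assumes .hgignore uses its default regexp syntax):

        # clone both repositories side by side
        hg clone https://example.org/hg/framework ~/src/framework
        hg clone https://example.org/hg/application ~/src/application

        # point the application's lib/ at the framework's lib/
        cd ~/src/application
        ln -s ~/src/framework/lib lib/framework

        # keep Mercurial from tracking the symlink
        echo '^lib/framework$' >> .hgignore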

    Read the article

  • Transaction log is full and does not free up space

    - by titanium
    Hi, I have a database in SQL Server 2005 whose transaction log keeps filling up. It is using snapshot replication. I noticed the transaction log is not freeing up space, so I created an additional transaction log file. Three days have passed and the first transaction log is still full. I performed a full database backup and a transaction log backup, then tried to shrink the transaction log, but the shrink failed. Can anyone advise why shrinking the transaction log is failing? Any other recommendations on how to resolve the problem?

    Read the article

  • Generate regular expression to match strings from the list A, but not from list B

    - by Vlad
    I have two lists of strings, ListA and ListB. I need to generate a regular expression that will match all strings in ListA and will not match any string in ListB. The strings can contain any combination of characters, numbers and punctuation. If a string appears in ListA, it is guaranteed not to be in ListB. If a string is not in either of these two lists, I don't care what the result of the matching is. The lists typically contain thousands of strings, and the strings are fairly similar to each other. I know the trivial answer to this question, which is to just generate a regular expression of the form (Str1)|(Str2)|(Str3), where StrN is a string from ListA, but I am looking for a more efficient way to do this. The ideal solution would be some sort of tool that takes the two lists and generates a Java regular expression for this. Update 1: By "efficient", I mean generating an expression that is shorter than the trivial solution; the ideal algorithm would generate the shortest possible expression. Here are some examples.

        ListA = { C10, C15, C195 }
        ListB = { Bob, Billy }
        The ideal expression would be /^C1.+$/

    Another example; note the third element of ListB:

        ListA = { C10, C15, C195 }
        ListB = { Bob, Billy, C25 }
        The ideal expression is /^C[^2]{1}.+$/

    The last example:

        ListA = { A, D, E, F, H }
        ListB = { B, C, G, I }
        The ideal expression is the same as the trivial solution, /^(A|D|E|F|H)$/

    Also, I am not looking for the ideal solution; anything better than the trivial one would help. I was thinking along the lines of generating the list of trivial solutions and then trying to merge the common substrings while making sure we don't wander into ListB territory. Update 2: I am not particularly worried about the time it takes to generate the regex; anything under 10 minutes on a modern machine is acceptable.

    Read the article

  • How can I boot a vm on Hyper-V 2012 when it has a virtual hard-drive missing?

    - by Zone12
    We have a Hyper-V 2012 server with 8 VMs on it. We have attached extra virtual hard drives to each of the machines to store backups on; these drives are stored on a NAS. After a power failure, we tried to boot the VMs and found that they couldn't be booted without the attached backup drives. We couldn't boot the NAS at that point, so we had to remove all the extra drives manually, boot the VMs, and re-attach the drives later once we got the NAS back up and running. These backup drives are non-essential to the running of the system. I would like to know whether there is a way to boot a VM on Hyper-V 2012 with some of its (SCSI) hard drives missing, so that we can recover automatically from a power failure.

    Read the article

  • How do I daemonize an arbitrary script in unix?

    - by dreeves
    I'd like a daemonizer that can turn an arbitrary, generic script or command into a daemon. There are two common cases I'd like to deal with: (1) I have a script that should run forever. If it ever dies (or on reboot), restart it. Don't let there ever be two copies running at once (detect if a copy is already running and don't launch it in that case). (2) I have a simple script or command-line command that I'd like to keep executing repeatedly forever (with a short pause between runs). Again, don't allow two copies of the script to ever be running at once. Of course it's trivial to write a "while(true)" loop around the script in case 2 and then apply a solution for case 1, but a more general solution will just solve case 2 directly, since that applies to the script in case 1 as well (you may just want a shorter or no pause if the script is not intended to ever die; of course if the script really does never die, then the pause doesn't actually matter). Note that the solution should not involve, say, adding file-locking code or PID recording to the existing scripts. More specifically, I'd like a program "daemonize" that I can run like

        % daemonize myscript arg1 arg2

    or, for example,

        % daemonize 'echo `date` >> /tmp/times.txt'

    which would keep a growing list of dates appended to times.txt. (Note that if the argument(s) to daemonize is a script that runs forever, as in case 1 above, then daemonize will still do the right thing, restarting it when necessary.) I could then put a command like the above in my .login and/or cron it hourly or minutely (depending on how worried I was about it dying unexpectedly). NB: The daemonize script will need to remember the command string it is daemonizing, so that if the same command string is daemonized again it does not launch a second copy. Also, the solution should ideally work on both OS X and Linux, but solutions for one or the other are welcome. (If I'm thinking of this all wrong or there are quick-and-dirty partial solutions, I'd love to hear that too.)
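
    A minimal sketch of such a wrapper for the Linux side, under stated assumptions: it relies on flock(1) and md5sum (both Linux; OS X would need a different locking mechanism), takes the whole command as a single quoted argument, and hashes that string to build a lock file name so the same command is never daemonized twice.

        #!/usr/bin/env bash
        # daemonize: restart a command forever, but never run two copies of it.
        # Usage: daemonize 'echo `date` >> /tmp/times.txt'
        cmd="$1"

        # One lock file per distinct command string.
        lock="/tmp/daemonize.$(printf '%s' "$cmd" | md5sum | cut -c1-32).lock"

        (
          flock -n 9 || exit 0        # another copy of this command is already running
          while true; do              # restart the command whenever it exits
            bash -c "$cmd"
            sleep 5                   # short pause between runs
          done
        ) 9>"$lock" &
        disown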

    Read the article

  • Reading S.M.A.R.T. statistics on an ESXi?

    - by leeand00
    Is it possible to read S.M.A.R.T. statistics for a hard drive in a VMware ESXi host? When I ran a backup last night I received an error message that didn't really indicate whether the error came from the local drive I was backing up to or from the remote ESXi virtual machine's E:\ drive I was backing up from. When this happened I ran chkdsk on the local drive and on the remote virtual drive. It seems to have worked and I'm no longer getting the error, but if it is something serious, I'd like to know about it before the drive fails. I've already hooked up the backup drive to my own system so I can read its S.M.A.R.T. statistics, but I have no idea how one could do this on an ESXi host.
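
    For what it's worth, a sketch of how this looks on newer ESXi releases (5.1 and later expose SMART data through esxcli; older versions may not offer this at all, and the device identifier below is a placeholder):

        # list the physical devices to find the identifier (something like naa.xxxxxxxx)
        esxcli storage core device list

        # dump the SMART attributes for one device
        esxcli storage core device smart get -d naa.600508b1001c0de5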

    Read the article

  • How can we recover/restore lost/overwritten data in our MSSQL 2008 table?

    - by TeTe
    I am in serious trouble and am seeking professional advice here. We are using MSSQL Server 2008. We removed a primary key and replaced existing data with new data, which resulted in the loss of our critical business data in the child tables. It was entirely human error; there was no disk failure. 1) The last backup file is from a month ago, which means it is useless. 2) We created maintenance plans to back up our database at 12 AM every day, but those files are nowhere to be found. 3) A friend of mine said we can recover from the transaction logs, but when I go to Tasks > Restore, "Transaction Log" is dimmed/disabled. 4) I checked Management > Maintenance Plans and can't find any restore point there; it seems our maintenance plan hasn't been working. Is there any third-party tool to recover lost/overwritten data from an MSSQL table? Thanks a lot.

    Read the article

  • How to efficiently get all instances from deeper level in Cocoa model?

    - by Johan Kool
    In my Cocoa Mac app I have an instance A which contains an unordered set of instances B, each of which in turn has an ordered set of instances C. An instance of C can only be in one instance of B, and a B only in one A. I would like to have an unordered set of all instances of C available on instance A. I could enumerate over all instances of B each time, but that seems expensive for something I need to do often. However, I am a bit worried that keeping track of the C instances in A could become cumbersome and a source of inconsistencies, for example if an instance of C gets removed from B but not from A.

        Solution 1: Use an NSMutableSet in A and add or remove C instances whenever I do the same operation in B.
        Solution 2: Use a weakly referenced NSHashTable in A. When deleting a C from B, it should disappear from A as well.
        Solution 3: Use key-value observing in A to keep track of changes in B, and update an NSMutableSet in A accordingly.
        Solution 4: Simply iterate over all instances of B to create the set whenever I need it.

    Which way is best? Are there any other approaches that I missed? NB: I don't and won't use Core Data for this app.

    Read the article

  • rsync stuck with the --checksum option

    - by billc.cn
    I use back-in-time to back up my Linux installation. It serves as an advanced wrapper around the rsync command. Today I tried to add /var/log to the list of folders to be backed up, and it caused some serious performance problems. The job seems to get stuck on a particular file, and the CPU usage of the rsync parent process reaches 100%. I then used lsof to see which file caused the problem, and it seems to be the /var/log directory. After some googling and some experiments with different rsync options, I found --checksum to be the offender. Without that parameter, an incremental backup finishes properly in minutes. With it, the process gets stuck when rsync tries to sync a constantly changing log file. That kind of makes sense, but it still seems like a bug to me. Am I using the option correctly? Is there a workaround for this?
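
    A small sketch of the trade-off, with placeholder paths, assuming the backup is ultimately an rsync invocation (exactly which log files to exclude is an assumption and depends on the system):

        # default comparison by size and mtime: fast, fine for an incremental backup
        rsync -a /var/log/ /backup/var/log/

        # --checksum re-reads every file in full, which can stall on logs that are
        # being appended to while rsync reads them
        rsync -a --checksum /var/log/ /backup/var/log/

        # one workaround: keep --checksum but skip the busiest, constantly-rotating logs
        rsync -a --checksum --exclude='syslog*' --exclude='*.log' /var/log/ /backup/var/log/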

    Read the article

  • How to set the width of a cell in a UITableView in grouped style

    - by Tomen
    I have been working on this for about 2 days, so I thought I'd share my learnings with you. The question is: is it possible to make the width of a cell in a grouped UITableView smaller? The answer is: no. But there are two ways you can get around this problem.

    Solution #1: A thinner table. It is possible to change the frame of the tableView so that the table will be smaller. This will result in UITableView rendering the cells inside it with the reduced width. A solution for this can look like this:

        - (void)viewWillAppear:(BOOL)animated {
            CGFloat tableBorderLeft = 20;
            CGFloat tableBorderRight = 20;
            CGRect tableRect = self.view.frame;
            tableRect.origin.x += tableBorderLeft;                      // make the table begin a few pixels right of its origin
            tableRect.size.width -= tableBorderLeft + tableBorderRight; // reduce the width of the table
            tableView.frame = tableRect;
        }

    Solution #2: Having cells rendered by images. This solution is described here: http://cocoawithlove.com/2009/04/easy-custom-uitableview-drawing.html

    I hope this information is helpful to you. It took me about 2 days to try a lot of possibilities, and this is what was left.

    Read the article
