Search Results

  • RoR Server won't detach with -d option

    - by Rodrigo
    I had to reinstall a bunch of my port installs the other day, and now when I launch my RoR server with "ruby script/server -d -p 3000" the server won't work. I am not seeing any errors in the logs. If I start it with "ruby script/server -p 3000" it works fine. Any ideas what I might have uninstalled that would cause this behavior?
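
    A couple of quick checks that might narrow down what the port reinstall removed (a minimal sketch, assuming a standard Rails 2.x layout; mongrel/thin are just the usual server gems, not something named in the question):

        # run in the foreground first and watch the log for the real error
        ruby script/server -p 3000
        tail -n 50 log/development.log
        # confirm the ruby and gems the daemonized server needs survived the reinstall
        ruby -v
        gem list | grep -iE 'rails|mongrel|thin'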


  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets attacked with DDoS. At the moment I have an offsite backup which is filled up with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people as to what option is best to proceed from here. Would it be a viable solution to ditch the offsite backup and instead purchase an additional server which is an exact duplicate of the first server? Then if the first server is down, users are re-routed to the second server without noticing the first server is even down. This would create an automatic backup of the first server (albeit not offsite) and remove the need for the expensive offsite backup. Is the above a true solution to pricey backup, or is offsite backup absolutely necessary? How would I go about doing this (obviously it's pretty complex, so just links to some reading material or the terminology of the procedure would be great)? Appreciate the help and advice.
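
    As a rough illustration of the replication half of this, a minimal sketch only (hostnames and paths are placeholders, it assumes SSH key auth between the two servers, and it does nothing about the routing/failover part):

        # push the web root and config to the standby box
        rsync -az --delete /var/www/ standby.example.com:/var/www/
        rsync -az --delete /etc/httpd/ standby.example.com:/etc/httpd/
        # databases need their own mechanism, e.g. a dump shipped across
        mysqldump --all-databases | gzip > /tmp/all-dbs.sql.gz
        rsync -az /tmp/all-dbs.sql.gz standby.example.com:/tmp/

    The re-routing side (DNS failover, a floating IP, or a heartbeat setup) is a separate piece and is where most of the reading material on this topic lives.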


  • DB Object passing between classes singleton, static or other?

    - by Stephen
    So I'm designing a reporting system at work. It's my first project written in OOP and I'm stuck on the design choice for the DB class. Obviously I only want to create one instance of the DB class per session/user and then pass it to each of the classes that need it. What I don't know is what's best practice for implementing this. Currently I have code like the following:

        class db {
            private $user = 'USER';
            private $pass = 'PASS';
            private $tables = array('user', 'report', 'etc...');
            function __construct() {
                // SET UP CONNECTION AND TABLES
            }
        }

        class report {
            function __construct($params = array(), $db, $user) {
                // Error checking/handling trimmed
                // $db is the database object we created
                $this->db = $db;
                // $this->user is the user object for the logged-in user
                $this->user = $user;
                $this->reportCreate();
            }

            public function setPermission($permissionId = 1) {
                // Note the $this->db - is this the best practice solution?
                $this->db->permission->find($permissionId);
                // Note the $this->user - is this the best practice solution?
                $this->user->checkPermission(1);
                $data = array();
                $this->db->reportpermission->insert($data);
            }
        } // end report

    I've been reading about using static classes and have just come across singletons (though these appear to be passé already?), so what's current best practice for doing this?


  • Oracle DB (Japanese-language post)

    - by Yusuke.Yamamoto
    [The Japanese text of this entry did not survive encoding corruption; only its references remain.] The post pointed to a Togetter summary about Oracle DB, the "Keep It Simple, Stupid" blog posts Part1 and Part2 (by @yoshikaw), mentions of Oracle Database, Oracle Ace and ORACLE MASTER Platinum, and oracletech.jp.


  • Can't attach EC2 instance to Network Interface

    - by Ian Warburton
    When trying to attach a network interface, it says: "No instances were found for this availability zone." My instance is in us-east-1c and my network interface is in us-east-1b. Is that significant? If so, how do I create the VPC in the same zone, and if not, why this error? EDIT: I've re-created the VPC and the Network Interface is now in us-east-1c, and the EC2 instance is also in us-east-1c. Same error message though!
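
    For reference, a quick way to confirm what each side actually reports (a minimal sketch using the AWS CLI; the IDs are placeholders):

        aws ec2 describe-instances --instance-ids i-xxxxxxxx \
            --query 'Reservations[].Instances[].Placement.AvailabilityZone'
        aws ec2 describe-network-interfaces --network-interface-ids eni-xxxxxxxx \
            --query 'NetworkInterfaces[].AvailabilityZone'

    An ENI lives in a subnet, and each subnet sits in exactly one availability zone (the VPC itself spans zones), so both commands need to return the same zone for the attach to be allowed.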


  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and the SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU activity for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:
    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directory on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can backup and restore the test DB.
    - I have tried both restoring of a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in the Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in the SQL Management Studio - it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.)
    Any ideas?
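
    This doesn't explain the hang, but for reference, here is the restore pattern described above in scripted form (a minimal sketch; the database name, logical file names and paths are placeholders, and RESTORE FILELISTONLY lists the real logical names to use in the MOVE clauses):

        sqlcmd -S . -E -Q "RESTORE FILELISTONLY FROM DISK = 'D:\Backups\MyDb.bak'"
        sqlcmd -S . -E -Q "RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb.bak' WITH MOVE 'MyDb_Data' TO 'D:\DATA\MyDb.mdf', MOVE 'MyDb_Log' TO 'E:\LOG\MyDb.ldf', RECOVERY, STATS = 5"

    STATS = 5 at least makes the progress, and the exact point where it stalls, visible from the command line instead of only in Management Studio.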


  • SQL Server Restore from Backup, Just primary File Group

    - by bladefist
    Thankfully, this question is just a what-if, and I am not in an emergency right now. But I have created a file group in my database (SQL Server 2008) and moved some massive data tables over to it, leaving my website's central tables in the primary file group. In the event of a restore, can I restore just the primary file group and have a working database, or do I have to restore both file groups? I don't want my site down for ages while it restores the 2nd file group.
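
    SQL Server 2008 does support this kind of staged ("piecemeal") restore; a rough sketch of its shape (database, filegroup and path names are placeholders, and the recovery-model caveats matter: under the full recovery model the later stages also need the subsequent log backups, while under the simple recovery model only read-only filegroups can be deferred like this):

        REM initial, partial restore: primary filegroup only, database comes online
        sqlcmd -S . -E -Q "RESTORE DATABASE MySite FILEGROUP = 'PRIMARY' FROM DISK = 'D:\Backups\MySite.bak' WITH PARTIAL, RECOVERY"
        REM later: bring the deferred filegroup (placeholder name) back online
        sqlcmd -S . -E -Q "RESTORE DATABASE MySite FILEGROUP = 'BigTablesFG' FROM DISK = 'D:\Backups\MySite.bak' WITH RECOVERY"

    Until the second stage runs, tables in the deferred filegroup are simply offline; queries that only touch the primary filegroup keep working.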


  • unable to re-attach screen session on freebsd

    - by Michael
    I have a screen session that I am unable to re-attach to. I have tried kill -CHLD 6859, with zero success. Is there anything else that I can try to get this session re-attached?

        q4# screen -ls
        No Sockets found in /tmp/screens/S-root.
        q4# ls -la /tmp/screens/S-root/
        total 8
        drwx------  2 root  wheel  512 May 26 12:52 .
        drwxr-xr-x  4 root  wheel  512 Feb 26  2013 ..
        prwx------  1 root  wheel    0 May 26 10:14 6859.pts-0.q4
        q4# ps uax 6859
        USER   PID %CPU %MEM   VSZ   RSS  TT  STAT STARTED      TIME COMMAND
        root  6859  0.0  1.2 84732 50444  ??  Ss   2Jan13   34:06.71 screen -h 9999
        q4# screen -r
        There is no screen to be resumed.
        q4# whoami
        root
        q4#
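
    A few things that are sometimes worth trying with an orphaned socket like this (a minimal sketch; 6859 and the socket name come from the output above):

        screen -D -r 6859          # detach the session elsewhere (if attached) and resume it here
        screen -r 6859.pts-0.q4    # resume by the full socket name
        screen -wipe               # remove dead socket entries
        kill -CHLD 6859            # ask screen to recreate its socket (already tried above)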


  • Restore "lost" user after Active Directory removal?

    - by Zulgrib
    Is it possible to restore lost users after Active Directory uninstallation? (I forgot to switch users to local users.) The computer runs Windows Server 2008 R2 Enterprise, and all the registry entries linked to the user I want to restore seem to still be there; the user's folder is still on the hard drive, and useraccount2 still shows the user (but flagged as an unknown user). Some folders still have rights set to this lost user, and even the local default Admin account cannot open/delete those folders. (But the real problem here is to find how to recover the user accounts; the folders can be deleted another way.) All the users I want to restore were originally local users, converted to domain users after the Active Directory installation. I think that if I can change a user's SID (choosing the SID manually) I'll be able to easily recover the rights on the folders. Regards


  • Fixing restore active desktop on windows xp

    - by Rachel Nark
    I've already tried this: http://answers.microsoft.com/en-us/windows/forum/windows_xp-desktop/windows-xp-will-not-restore-my-active-desktop/f664bfe4-0acd-4b11-8918-eb779bb2cc07 and had no luck. All I know is that the computer crashed from a power outage. I've tried clicking the restore button and rebooting. Nothing. What else is there to try? I would like to have the desktop back. It powers on fine, and I can log in and use Windows just fine; you just get that annoying "restore active desktop" screen.


  • attach two hdds to a computer having 500W smps

    - by Gaurav Sharma
    I have a desktop system with a 250 GB HDD (Seagate SATA II). The system has a 500W power supply (not sure if it is 650W, but not more than that); the power supply is a local brand. Will it be safe to attach a second 250 GB SATA II HDD to the same system? Safe in the sense that the system should not fall short of power at any time. My system's config is as follows:
    - Core2Duo processor
    - Mercury cabinet with an additional (small) fan
    - SATA DVD writer, 52X
    - Windows XP SP2
    - ASUS motherboard (Intel G965 Express chipset)
    If the above power supply is not sufficient for this configuration, then please suggest an appropriate power supply (including wattage).


  • SQL – Difference Between INNER JOIN and JOIN

    - by Pinal Dave
    Here is the follow-up question to my earlier question SQL – Difference between != and Operator <> used for NOT EQUAL TO Operation. There was a pretty good discussion about this subject earlier and lots of people participated with their opinions. Though the answer was very simple, the conversation was indeed delightful and very informative. In this blog post I have another follow-up question for all of you. What is the difference between INNER JOIN and JOIN? If you are working with databases you will find developers using both kinds of joins in their SQL queries. Here is a quick example of the same.
    Query using INNER JOIN:
        SELECT *
        FROM Table1
        INNER JOIN Table2 ON Table1.Col1 = Table2.Col1
    Query using JOIN:
        SELECT *
        FROM Table1
        JOIN Table2 ON Table1.Col1 = Table2.Col1
    The question is: what is the difference between the above two syntaxes? Here is the answer – they are equal to each other. There is absolutely no difference between them. They are equal in performance as well as implementation. JOIN is actually a shorter version of INNER JOIN. Personally I prefer to write INNER JOIN because it is much cleaner to read and it avoids any confusion related to the JOIN. For example, if users had written INNER JOIN instead of JOIN there would have been no confusion in mind and hence there would have been no need for the original question. Here is the question back to you – which of the following syntaxes do you use when you are inner joining two tables, INNER JOIN or JOIN? And why?
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:
    http://blog.stackoverflow.com
    http://www.codinghorror.com
    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using this:
    - My IP address was quickly banned from Google for using it
    - I get lots of 500 and 503 errors and "waiting 5 minutes…"
    - Ultimately, I can recover the text content faster by hand
    I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)
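
    For the images specifically, the Wayback Machine (web.archive.org) often has page assets as well as HTML, unlike the search-engine caches; a minimal sketch (the post URL and timestamp are placeholders, and coverage depends entirely on what was crawled):

        # fetch an archived copy of a post; a partial timestamp picks the closest capture
        wget "http://web.archive.org/web/2009/http://www.codinghorror.com/blog/archives/001234.html"
        # also pull the images and other assets that archived page references
        wget -p -k "http://web.archive.org/web/2009/http://www.codinghorror.com/blog/archives/001234.html"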


  • Duplicity can't connect to CloudFiles "Network is unreachable"

    - by jwandborg
    Whenever I click "Backup now" in the Backup GUI, the smaller "Back Up" window opens, and after a while I get the following error message:

        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1359, in with_tempdir(main)
          File "/usr/bin/duplicity", line 1342, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1202, in main
            action = commandline.ProcessCommandLine(sys.argv[1:])
          File "/usr/lib/python2.7/dist-packages/duplicity/commandline.py", line 942, in ProcessCommandLine
            globals.backend = backend.get_backend(args[0])
          File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 156, in get_backend
            return _backends[pu.scheme](pu)
          File "/usr/lib/python2.7/dist-packages/duplicity/backends/cloudfilesbackend.py", line 70, in __init__
            self.container = conn.create_container(container)
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 250, in create_container
            response = self.make_request('PUT', [container_name])
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 189, in make_request
            response = retry_request()
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 182, in retry_request
            self.connection.request(method, path, data, headers)
          File "/usr/lib/python2.7/httplib.py", line 955, in request
            self._send_request(method, url, body, headers)
          File "/usr/lib/python2.7/httplib.py", line 989, in _send_request
            self.endheaders(body)
          File "/usr/lib/python2.7/httplib.py", line 951, in endheaders
            self._send_output(message_body)
          File "/usr/lib/python2.7/httplib.py", line 811, in _send_output
            self.send(msg)
          File "/usr/lib/python2.7/httplib.py", line 773, in send
            self.connect()
          File "/usr/lib/python2.7/httplib.py", line 1154, in connect
            self.timeout, self.source_address)
          File "/usr/lib/python2.7/socket.py", line 571, in create_connection
            raise err
        error: [Errno 101] Network is unreachable

    I use Rackspace CloudFiles as a storage backend; the last backup was 3 days ago (successful). I have not changed any settings since then.


  • Questions about norton ghost

    - by Nrew
    I have used Norton Ghost to back up my hard drive, and it came up with a .v2i file.
    - Will I be able to use this backup from my PC on my laptop?
    - Can I use this backup to restore my dual-boot PC back into shape if the MBR is destroyed/damaged?
    - Can I use this backup to restore my OS and applications on the same machine if the machine's hard drive is reformatted?


  • rsnapshot preexec

    - by Zulakis
    I am mounting my remote backup volume using a rsnapshot cmd_preexec script. If the /mnt/backup directory doesn't exist when starting rsnapshot, I get this error: "ERROR: /mnt/backup does not exist." If the directory exists and the preexec mounting fails, it does not stop rsnapshot, resulting in the backup ending up on the completely wrong server... What should I do about this? Edit: I know that I could use a wrapper script, but I don't want to do this.
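
    One safeguard that doesn't depend on the cmd_preexec exit status at all: point snapshot_root at a directory that only exists on the mounted volume and tell rsnapshot not to create it. A minimal sketch (paths are examples; rsnapshot.conf fields are tab-separated):

        # in rsnapshot.conf:
        #   snapshot_root   /mnt/backup/snapshots/
        #   no_create_root  1
        # one-time setup, while the backup volume is mounted:
        mkdir /mnt/backup/snapshots
        # if the mount fails later, /mnt/backup/snapshots is missing and rsnapshot
        # aborts instead of quietly filling the local /mnt/backup directory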


  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7 32bit) to help me with this process: I have my documents, music, video clips, movies and pictures on my hard disk. These will not be scattered around the system, but will be inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, and new files should be copied, etc.; at the end of the process, every bit inside the source folder on my internal disk should be inside my external disk. A couple of important requirements and points:
    - I do NOT need multiple versions or historic versions. I don't need the previous versions of my files. I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense. If files were not touched since the last backup, it need not copy them.
    - The size of my folder will run into GBs and in a year or two will go into TBs. But I will make sure the size of the external HDD is equal to or bigger than my source folder.
    - I do not want it to run automatically, because when I accidentally delete a file in my source, it will delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation.
    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking this question is that I am a software developer and I can write this software myself, but I am a little constrained by time at the moment and I want to know if there is an existing program that can do this. Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because:
    - I will have bigger things to worry about than my holiday memories.
    - I don't think I will digitally store any life-ruining documents.
    - This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world.
    - I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete or teens who hit Shift + Delete, or myself getting a little careless at times!
    In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
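
    For what it's worth, robocopy, which ships with Windows 7, already does a plain one-way mirror when run by hand; a minimal sketch (the external disk showing up as E: is just an assumption):

        robocopy C:\Senthil E:\Senthil /MIR /R:2 /W:5 /LOG:E:\senthil-sync.log
        REM /MIR mirrors the tree: it copies new and changed files and deletes files
        REM that no longer exist in C:\Senthil; /R and /W just shorten the retry waits

    Restoring a single file in the accidental-delete case is then just copying it back from E:\Senthil by hand.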


  • Backing up a 22 GB MySQL database daily

    - by unknown (yahoo)
    Right now I am able to do the backup using mysqldump, but I have to take down the web server and it takes around 5 minutes to do the backup. If I don't take down the web server, it takes forever and never finishes, and the website becomes inaccessible during the backup. Is there a quicker/better way to back up my 22 GB and growing database? All the tables are MyISAM.
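
    Since every table is MyISAM, one commonly used alternative is a file-level copy that only holds a read lock while the table files are copied, instead of locking for the length of a full mysqldump; a minimal sketch using mysqlhotcopy (the database name and target path are placeholders, and credentials are assumed to come from ~/.my.cnf):

        mysqlhotcopy --allowold mydb /backup/mysql/

    An LVM or filesystem snapshot of the data directory is another common way to keep the lock window short for a database this size.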

