Search Results

Search found 30724 results on 1229 pages for 'backup solution'.

  • Using AutoMySQLBackup on Rackspace Cloud

    - by xref
    Since Rackspace Cloud only allows FTP access, using AutoMySQLBackup is a little trickier, and while it is at least creating the DB dumps, I get errors in the backup log:

        ###### WARNING ######
        Errors reported during AutoMySQLBackup execution.. Backup failed
        Error log below..
        .../backups/automysqlbackup: line 1791: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1855: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 803: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1972: /usr/bin/du: Permission denied

    Since the dump files are being created, I'm assuming the failing find commands are the ones that rotate out and delete the old backups. Line 803 is:

        find "${CONFIG_backup_dir}/${subfolder}${subsubfolder}" -mtime +"${rotation}" -type f -exec rm {} \;

    Any ideas for alternatives?
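
    One possible direction, given that only find and du are being denied while the dumps themselves get written: a pure-bash rotation run from cron that never calls find. This is a rough sketch, not AutoMySQLBackup's own mechanism; it assumes plain rm is still permitted, that the dump filenames are date-stamped (AutoMySQLBackup's default) so glob order matches age, and BACKUP_DIR is a hypothetical path:

        #!/bin/bash
        # keep the newest $KEEP dumps, delete the rest, without calling find or du
        BACKUP_DIR=/path/to/backups/daily/mydb   # hypothetical location
        KEEP=7
        shopt -s nullglob
        files=( "$BACKUP_DIR"/*.sql.gz )         # globs expand in sorted (== date) order
        for (( i = 0; i < ${#files[@]} - KEEP; i++ )); do
            rm -f -- "${files[i]}"
        done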

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command:

        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    When I run it, the scp progress bar shows up and it looks like it's transferring the file. However, when I log into the destination server to check the file, the timestamp and file size haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server first. I'm running Ubuntu, and this is happening with two servers: one Cygwin SSH, and one Fedora Core 3. Anyone have any idea why this is happening? I thought scp would simply overwrite existing files.. Thanks
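
    A hedged troubleshooting sketch (same host and paths as above): compare checksums on both ends, watch scp's verbose output, and try rsync, which reports exactly what it replaces:

        scp -v /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz   # -v shows the remote command used
        md5sum /tmp/backup.tar.gz                                     # local checksum
        ssh hostname 'md5sum /home/user/backup.tar.gz'                # remote checksum; differs if nothing landed
        rsync -av /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz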

  • Efficient Method for Preventing Hotlinking via .htaccess

    - by Michael Robinson
    I need to confirm something before I go accuse someone of ... well, I'd rather not say.

    The problem: we allow users to upload images and embed them within text on our site. In the past we also allowed users to hotlink to our images, but due to server load we unfortunately had to stop this.

    Current "solution": the method the programmer used to solve our "too many connections" issue was to rename the file that receives and processes image requests (image_request.php) to image_request2.php, and replace the contents of the original with:

        <?php header("HTTP/1.1 500 Internal Server Error"); ?>

    Obviously this has broken all images whose src attribute points to the original image_request.php, and it is also the wrong status code to send in this case.

    Proposed solution: I feel a more elegant approach would be, in .htaccess: if the request is for image_request.php, check the referrer; if the referrer is not our site, send the appropriate header; if it is our site, proceed to image_request.php and process the image request.

    What I would like to know is: compared to simply returning a 500 for each request to image_request.php, how much more load would my proposed alternative incur? Is there a better way to do this? Our main concern is that the site stays up. I am not willing to agree that breaking all internally linked images is the best / only way to solve this. I refuse to tell our users that because of something WE changed they must now manually change the embed code in all their previously uploaded content.
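
    For reference, a minimal sketch of the proposed referrer check in .htaccess, assuming mod_rewrite is available; example.com is a placeholder for the real domain, and empty referrers are let through so browsers that strip the header keep working. The conditions cost almost nothing per request compared to starting PHP at all, and [F] sends the more appropriate 403:

        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/image_request\.php$
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
        RewriteRule .* - [F]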

  • No free disk space ;[

    - by skomak
    Hi, I have a weird situation: the Linux df command says there is no free disk space:

        [root@backup cache]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3              72G   70G     0 100% /
        /dev/sda1             190M   11M  170M   7% /boot
        tmpfs                 248M     0  248M   0% /dev/shm

    but du -sh /* says:

        [root@backup cache]# du -sh /*
        4.0K    /bacula-restores
        7.4M    /bin
        5.4M    /boot
        3.6T    /data
        116K    /dev
        55M     /etc
        204K    /home
        76M     /lib
        16K     /lost+found
        12K     /media
        0       /misc
        16K     /mnt
        8.0K    /mount
        0       /net
        8.0K    /opt
        0       /proc
        2.3G    /root
        32M     /sbin
        8.0K    /selinux
        168K    /share
        8.0K    /srv
        0       /sys
        361M    /test
        20K     /tmp
        3.2G    /usr
        1.5G    /var

    Could you tell me where the problem is? Where is my space? I can't figure it out :(
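
    Two classic causes worth ruling out here, sketched with hedged commands: files that were deleted while a process still holds them open (df counts them, but du cannot see them), and files written under a directory before a filesystem was mounted over it (du of the mounted /data cannot see what the mount point hides). /mnt/rootfs is a hypothetical scratch mount point:

        lsof +L1                      # open files with link count 0: deleted but still held
        mkdir /mnt/rootfs
        mount --bind / /mnt/rootfs    # a bind mount exposes files hidden under mount points
        du -sh /mnt/rootfs/*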

  • Hard drive in the freezer ever work for you?

    - by Stefan Thyberg
    Once upon a time, the little 10 GB drive in my web server failed and of course I had no backup, teaching me to immediately set up an automatic backup job afterwards. Anyhow, this drive refused to start, and as a last-ditch effort I put it in a plastic bag and left it in the freezer overnight, since I had heard somewhere that this might work and I really didn't have any other options. The next day I took it out, plugged it in outside the case and, lo and behold, the drive worked long enough for me to copy my data off it. Have you ever had a similar experience with this method?

  • How to go about scheduling a task in Windows 7 to change wireless connection

    - by Skindeep2366
    This may or may not be something that can be done. I cannot find anything on the wireless connection manager built into Windows 7, let alone methods for passing params into it.

    The problem is as follows: I have 2 wireless routers. One provides internet access; the other provides sole access to the local network. Every day at 4 am the main system creates a backup in 2 locations: one is an external USB drive, the other is a location on the network. This is all fine if someone remembers to change over to the local network router before leaving. But if it is forgotten, the roof will collapse, the walls will burn, and I will be... well, you get the idea.

    Solution: there is already a custom event that fires an automated backup program at 4 am every day. I need some way to force the wireless network to use the correct connection at, say, 3:58 am every day. Any ideas????
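
    A hedged sketch for Windows 7: netsh can switch between saved WLAN profiles, and schtasks can fire it on a schedule. The profile names here (LocalNet, InternetNet) are hypothetical; run "netsh wlan show profiles" to see the real ones, and create the tasks from an elevated prompt:

        schtasks /Create /TN "SwitchToLocalNet" /SC DAILY /ST 03:58 /TR "netsh wlan connect name=LocalNet"
        schtasks /Create /TN "SwitchToInternet" /SC DAILY /ST 04:30 /TR "netsh wlan connect name=InternetNet"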

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I no longer have the hard drive space to back up (I have a RAID 1 array, so I haven't done it for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (i.e. 217.4 GiB) hard drive that I've been using for backup. What compression algorithm (if any) can manage roughly a 24% reduction? I don't care about the time; I have a quad core, so something that utilizes all 4 cores would be great. I have tried 7zip with no success: it ran on one core for two days and failed because of lack of space. Any ideas?
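
    A hedged sketch that streams straight to the spare drive (so no temp space is needed on the full array) and uses all four cores; whether it reaches the needed ~24% reduction depends entirely on the data, and already-compressed files (photos, video, archives) will barely shrink. /mnt/spare is a hypothetical mount point, and -T0 assumes xz >= 5.2 (pbzip2 is an older parallel alternative):

        tar -cf - /home | xz -T0 -6 > /mnt/spare/home.tar.xz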

  • Ubuntu server: Delete first folder in directory

    - by Martin
    How can I grab the first subfolder in a directory and delete it? I found a script to monitor free disk space. It sends an alert email if space runs low, but I also want to delete some unneeded stuff. I have a backup folder where I save daily and monthly backups, and I want to delete the first folder, since that is always the oldest, but I don't know the name of the oldest backup. My folders (excluding Jan-May and Dec):

        06-01  07-01  08-01  09-01  10-01  11-01
        Friday  Monday  Saturday  Sunday  Thursday  Tuesday  Wednesday

    How can I delete the first folder, "06-01", without knowing its name?
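
    A hedged sketch, assuming "oldest" means oldest modification time and that folder names contain no newlines; if "oldest" should instead mean first in name order, swap the ls line for: ls -1d */ | head -n 1

        cd /path/to/backups || exit 1      # hypothetical backup path
        oldest=$(ls -1dt */ | tail -n 1)   # -t sorts newest first, so the tail is the oldest
        rm -rf -- "$oldest"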

  • Redundant/multi-site terminal server

    - by Adam
    Hi, we have a Hyper-V cluster running 5 virtual terminal servers using HA. We need to make this system redundant, so that if this site were to fail our users could log into the backup system at another location and access their data via the terminal servers. Any ideas? We were thinking of using a NAS which replicates the data to the other location in real time (pass-through disks?) and having a similar Hyper-V cluster set up in the backup location. However, we would need to create the users in both locations and create a virtual mirror without the data, i.e. applications, directories, settings, etc. Is this the best way to achieve this? We have read that using Hyper-V pass-through disks causes a big performance degradation.

  • SQL restore from single file db to filegroup

    - by Mauro
    I have a 180 GB MOSS 2007 database whose maintenance (i.e. backups and restores) is becoming a problem. Part of the issue can be resolved by splitting the three content sites down into their own site collections; however, this will likely still leave me with a 100 GB DB to deal with. While this isn't entirely problematic for SQL, it does mean that backups/restores take far too long. My idea is to split each of the databases into 30 GB files and then import the content into them, which should distribute the content across the filegroups, making it much easier/faster to back up and restore. Is there a way to back up from a single file and restore to a filegroup? If I have the wrong understanding of filegroups, then I'm more than happy to find out other methods of managing the size of databases.
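
    As far as I know, a restore always recreates the file layout recorded in the backup, so a single-file backup cannot be restored directly into multiple filegroups; the usual route is to add filegroups to the restored database and then move tables onto them by rebuilding their clustered indexes with an ON FG1 clause. A hedged sqlcmd sketch; server, database, and path names are placeholders:

        sqlcmd -S myserver -E -Q "ALTER DATABASE WSS_Content ADD FILEGROUP FG1"
        sqlcmd -S myserver -E -Q "ALTER DATABASE WSS_Content ADD FILE (NAME = 'WSS_Content_FG1', FILENAME = 'D:\Data\WSS_Content_FG1.ndf', SIZE = 30GB) TO FILEGROUP FG1"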

  • Word-wrap grid cells in Ext JS

    - by richardtallent
    (This is not a question per se, I'm documenting a solution I found using Ext JS 3.1.0. But, feel free to answer if you know of a better solution!)

    The Column config for an Ext JS Grid object does not have a native way to allow word-wrapped text, but there is a css property to override the inline CSS of the TD elements created by the grid. Unfortunately, the TD elements contain a DIV element wrapping the content, and that DIV is set to white-space:nowrap by Ext JS's stylesheet, so overriding the TD CSS does no good. I added the following to my main CSS file, a simple fix that appears to not break any grid functionality, but allows any white-space setting I apply to the TD to pass through to the DIV:

        .x-grid3-cell {
            /* TD is defaulted to word-wrap. Turn it off so it can be
               turned on for specific columns. */
            white-space: nowrap;
        }

        .x-grid3-cell-inner {
            /* Inherit DIV's white-space from TD parent, since DIV's inline
               style is not accessible in the column definition. */
            white-space: inherit;
        }

    YMMV, but it works for me, wanted to get it out there as a solution since I couldn't find a working solution by searching the Interwebs.

  • Flash: Using mouse wheel events in full screen mode (Windows and Mac)

    - by Amir
    Although Flash has a mouse wheel event (MouseEvent.MOUSE_WHEEL), it comes with quite a few problems. The first is that the event is not yet supported on the Mac. There are a bunch of workarounds, all of which (basically) capture the mousewheel (or DOMMouseScroll) event in JavaScript and pass it into the Flash app. Luckily, under all the Mac browsers I tested, this also works when Flash is in full-screen mode. Problem 2 is that Flash ignores mouse wheel events with small "deltas". For example, Microsoft's IntelliPoint mice with "Smooth Scroll" cause this problem. The solution is the same as for the Mac: capture the JavaScript mouse wheel event in the browser and pass it to the app. The issue is that the Windows browsers I tested (Firefox, IE, Safari, and Chrome) don't seem to deliver this event when Flash is in full-screen mode. Does anyone know why, or how to fix that? I currently have a hybrid solution that always takes events from JavaScript (in windowed or full-screen mode) except in full-screen mode on Windows (where it takes them from the Flash mousewheel event). So the only time it fails is full-screen mode on Windows with a mouse that sends small deltas. Anyone have a full solution? Or just a better one?

  • 'cp' skips some of Eclipse's dot directories

    - by Dustin Digmann
    I am trying to back up my Eclipse .metadata directory. The command I run is:

        cp -Rf ~/some/where/.metadata/* ~/some/backup/.metadata/

    The first time I tried this, the copy skipped the lock file and the .plugins and .mylyn directories. After doing some research, I found some threads mentioning permission changes. I applied the changes and had some success. Now, running the script will not create or traverse into the .plugins or .mylyn directories. Additional research has come up with zero results. I am using Windows XP SP3 with Cygwin 1.7.1-1.
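
    One likely culprit: in bash, * never matches names that begin with a dot, and the skipped items (the lock file, .plugins, .mylyn) are exactly the dot-names inside .metadata. A hedged sketch of two ways around that, using the same paths as above:

        shopt -s dotglob                                       # make * match dot-names too
        cp -Rf ~/some/where/.metadata/* ~/some/backup/.metadata/
        # or skip the glob and copy the directory itself:
        cp -Rf ~/some/where/.metadata ~/some/backup/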

  • Python: circular imports needed for type checking

    - by phild
    First of all: I do know that there are already many questions and answers on the topic of circular imports. The answer is more or less: "Design your module/class structure properly and you will not need circular imports." That is true. I tried very hard to make a proper design for my current project, and in my opinion I was successful. But my specific problem is the following: I need a type check in a module that is already imported by the module containing the class to check against. But this throws an import error. Like so:

        # foo.py
        from bar import Bar

        class Foo(object):
            def __init__(self):
                self.__bar = Bar(self)

        # bar.py
        from foo import Foo

        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not isinstance(arg_instance_of_foo, Foo):
                    raise TypeError()

    Solution 1: if I modify it to check the type by a string comparison, it works, and bar.py no longer needs to import foo at all. But I don't really like this solution (string comparison is rather expensive for a simple type check, and could become a problem when it comes to refactoring):

        # bar_modified.py
        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not arg_instance_of_foo.__class__.__name__ == "Foo":
                    raise TypeError()

    Solution 2: I could also pack the two classes into one module. But my project has lots of different classes like the "Bar" example, and I want to separate them into different module files.

    Since neither of my own two solutions is an option for me: does anyone have a nicer solution to this problem?

  • Can't login to SQL Server after moving machine to different office/domain

    - by Dan
    Our company has just been bought, and over the weekend I have brought up the last few machines to plug into the new network (they are under a different Windows domain). The last machine is our Vault system, and its SQL Server was using Windows Authentication. I have plugged it into their network and it's working fine, but I cannot connect to SQL Server with Management Studio and, I fear, no backup jobs will be working either. When I try to log in under Windows Auth, it shows the user name "NEWDOMAIN\Administrator" (greyed out) and then presents a "login failed" message with error code 18456. Can anyone help me with this, or will I just have to reinstall SQL Server and Vault and restore the backup I took before the move?
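
    Before reinstalling, a hedged sketch that often covers this exact case (error 18456 here usually just means the new domain account has no SQL login yet): start the instance in single-user mode, in which members of the local Administrators group can connect as sysadmin, grant the new account access, then restart normally. Instance and account names are placeholders:

        net stop MSSQLSERVER
        net start MSSQLSERVER /m
        sqlcmd -S . -E -Q "CREATE LOGIN [NEWDOMAIN\Administrator] FROM WINDOWS; EXEC sp_addsrvrolemember 'NEWDOMAIN\Administrator', 'sysadmin';"
        net stop MSSQLSERVER
        net start MSSQLSERVER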

  • Method for defining simultaneous has-many and has-one associations between two models in CakePHP?

    - by Hobonium
    One thing with which I have long had problems, within the CakePHP framework, is defining simultaneous hasOne and hasMany relationships between two models. For example:

        BlogEntry hasMany Comment
        BlogEntry hasOne MostRecentComment

    (where MostRecentComment is the Comment with the most recent created field). Defining these relationships in the BlogEntry model properties is problematic. CakePHP's ORM implements a has-one relationship as an INNER JOIN, so as soon as there is more than one Comment, BlogEntry::find('all') calls return multiple results per BlogEntry.

    I've worked around these situations in the past in a few ways. Using a model callback (or, sometimes, even in the controller or view!), I've simulated a MostRecentComment with:

        $this->data['MostRecentComment'] = $this->data['Comment'][0];

    This gets ugly fast if, say, I need to order the Comments any way other than by Comment.created. It also doesn't work with Cake's built-in pagination features (e.g. sorting BlogEntry results reverse-chronologically by MostRecentComment.created). The other workaround is maintaining an additional foreign key, BlogEntry.most_recent_comment_id. This is annoying to maintain, and bends Cake's ORM: the implication is BlogEntry belongsTo MostRecentComment. It works, but just looks... wrong.

    These solutions left much to be desired, so I sat down with this problem the other day and worked on a better solution. I've posted my eventual solution below, but I'd be thrilled (and maybe just a little mortified) to discover there is some mind-blowingly simple solution that has escaped me this whole time. Or any other solution that meets my criteria: it must be able to sort by MostRecentComment fields at the Model::find level (i.e. not just a massage of the results); it shouldn't require additional fields in the comments or blog_entries tables; and it should respect the 'spirit' of the CakePHP ORM. (I'm also not sure the title of this question is as concise/informative as it could be.)

  • Simplest way to shrink transaction log files on a mirrored production database

    - by MGOwen
    What's the simplest way to shrink the transaction log file on a mirrored production database? I have to, as my disk space is running out. I will make a full database backup before I do this, so I don't need to keep anything from the transaction log (right? I take a daily full database backup and will probably never need a point-in-time restore, though I'll keep the option open if I can; that's all the .ldf is really for, correct?). (Hope this isn't an exact duplicate; I read a lot of questions but couldn't find this exact scenario elsewhere.)
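
    For what it's worth, a mirrored database must stay in the FULL recovery model, so switching to SIMPLE is not an option, and a full database backup by itself does not free log space; only a log backup marks the inactive portion reusable, after which the file can be shrunk. A hedged sqlcmd sketch; the logical log file name is a placeholder (check sys.database_files), and the target size is in MB:

        sqlcmd -S myserver -E -Q "BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log.trn'"
        sqlcmd -S myserver -E -Q "USE MyDb; DBCC SHRINKFILE (MyDb_log, 1024);"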

  • VS2010: Why do my custom Toolbox tabs and contained controls keep disappearing?

    - by Velika2
    This is how I expected the Toolbox to work: let's say I add a custom tab to the Toolbox called "AjaxToolkit". To add controls to the new tab, I right-click, select "Choose Items" and browse to a file, AjaxToolkit.dll, of a particular version. I would expect that when I save and reopen the solution, the AjaxToolkit custom tab would still be in my Toolbox and would contain the same controls that were there last time: the controls from the DLL I referenced when they were added. If I created a brand new web app, I (possibly) wouldn't expect to see the same AjaxToolkit custom tab. However, I could perform the same steps as above and add an "AjaxToolkit" tab, perhaps this time selecting a DIFFERENT VERSION of the toolkit, and the state of the Toolbox would be retained with each solution file. Another possibility would be for the original AjaxToolkit tab to be retained when the second web solution is created; and perhaps, if I wanted to mix versions of the toolkit across different web sites in my solution, I should start naming my custom Toolbox tabs with version-specific names like "AjaxToolkit 4.0", etc. ...But instead, the AjaxToolkit tab disappears when I close VS2010 and reopen it. Why? Is this desirable behavior or a bug?

  • Service reference error when moving dev. environment from XP to W7

    - by Peter
    Hi, I am building an application that uses web services to get data from a server. It was working fine when I was developing on my XP machine, but I had to switch to Windows 7. On the new machine I grabbed the latest version of the code using SourceSafe. However, when I try to add a service reference in the solution, or update an existing one, I get the following error:

        There was an error downloading 'http://localhost:52490/Service/CustomerService.asmx'.
        The request failed with the error message:
        Server Error in '/' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to
        service this request. Please review the following specific parse error details
        and modify your source file appropriately.
        Parser Error Message: Could not create type 'Digital_Server.CustomerService'.
        Source Error:
        <%@ WebService Language="vb" CodeBehind="CustomerService.asmx.vb"
            Class="Digital_Server.CustomerService" %>
        Source File: /Service/CustomerService.asmx    Line: 1
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927;
        ASP.NET Version:2.0.50727.4927
        --
        Metadata contains a reference that cannot be resolved:
        'http://localhost:52490/Service/CustomerService.asmx'.
        An error occurred while receiving the HTTP response to
        http://localhost:52490/Service/CustomerService.asmx. This could be due to the
        service endpoint binding not using the HTTP protocol. This could also be due
        to an HTTP request context being aborted by the server (possibly due to the
        service shutting down). See server logs for more details.
        The underlying connection was closed: An unexpected error occurred on a
        receive. Unable to read data from the transport connection: An existing
        connection was forcibly closed by the remote host.
        If the service is defined in the current solution, try building the solution
        and adding the service reference again.

    Does it have anything to do with IIS, or is there a configuration file I have to change in the solution? Any help is appreciated.

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly, and I would like to get some opinions on my DAL design to make sure nothing comes back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend, so the built-in LINQ to SQL is out; I also need to use production-level libraries, so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay), so NHibernate/Castle Project are out. I would prefer, if at all possible, to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements: Oracle backend; rapid development; (L)GPL-free; free. I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I've overlooked with this approach? Are there any other approaches you would recommend for converting DataSets to POCOs? Thanks in advance.

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 GB in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, seemingly at random, at some point during the day the log backup will go from its average size of 15 MB all the way up to 40 GB. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which run on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log file growths and pinpoint them to a particular application or client? Thanks in advance.
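
    A hedged starting point: msdb records the size of every backup, so the history can pinpoint the hour the log balloons, and DBCC SQLPERF shows live log usage while it happens; once the window is known, the default trace or a profiler session over that hour usually identifies the responsible client. Server and database names are placeholders:

        sqlcmd -S myserver -E -Q "SELECT TOP 30 backup_start_date, backup_size FROM msdb.dbo.backupset WHERE database_name = 'MyDb' AND type = 'L' ORDER BY backup_start_date DESC"
        sqlcmd -S myserver -E -Q "DBCC SQLPERF(LOGSPACE)"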

  • MySQL: Auto-increment value: 0 is smaller than max used value: xx

    - by Rhodri
    Increasingly I'm getting tables that have to be repaired, with the message returned being:

        Auto-increment value: 0 is smaller than max used value: xx

    This has happened on tables with 200 rows and on tables with ~3 million rows, but so far the same few tables have had the problem. I'm running MySQL 5.0.22. The repairs are run by a script which checks every minute whether MySQL tables need repair. I also have an automated backup of the 6 gigabyte database running every two hours, and the repairs always get triggered around the time of the backup. Any ideas?
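
    In case it helps, a hedged sketch of the usual first response: repair the affected tables and let MySQL recompute the counter (database and table names are placeholders; setting AUTO_INCREMENT below the current maximum makes MySQL clamp it back to max+1). Given that the repairs always coincide with the backup, it is also worth checking whether the backup locks or copies MyISAM files mid-write:

        mysqlcheck -u root -p --check --auto-repair mydb
        mysql -u root -p -e "ALTER TABLE mydb.mytable AUTO_INCREMENT = 1"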

  • How can I move mysites to a new location

    - by Bob
    I recently restored my content and was instructed to create mysites in a different location than was originally used. Now I have several users' mysites in /personal. The new desired location is /mysites. From what I found in the documentation, I should back them up and restore them to the new location. Here's what I've done. Back up the individual site collection for a user's mysite:

        stsadm -o backup -url "https://myUrl/personal/john_smith" -filename johnsmith.bkup

    Restore the individual site collection for the user's mysite:

        stsadm -o restore -url "https://myUrl/mysites/john_smith" -filename johnsmith.bkup -overwrite

    The result, and the problem, is that when I enumerate sites I end up with this:

        <Site Url="https://myUrl/mysites" Owner="domainname\john.smith"
              ContentDatabase="WSS_Content_MySites" StorageUsedMB="1.6"
              StorageWarningMB="90000" StorageMaxMB="100000" />

    It leaves off the username part of the URL, and if I restore more than one they want to overwrite each other.
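
    One hedged guess at the truncated URL: a site collection restored under a path that is not a managed path can end up at the web application root. A sketch that defines /mysites as a wildcard inclusion before restoring (same URLs as above); this is a guess, not a confirmed diagnosis:

        stsadm -o addpath -url https://myUrl/mysites -type wildcardinclusion
        stsadm -o restore -url "https://myUrl/mysites/john_smith" -filename johnsmith.bkup -overwrite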

  • Find min. "join" operations for sequence

    - by utyle
    Let's say we have a list/array of positive integers x1, x2, ..., xn. We can do a join operation on this sequence: we can replace two elements that are next to each other with one element equal to the sum of those elements. For example, given the array/list [1; 2; 3; 4; 5; 6]:

        we can join 2 and 3 and replace them with 5;
        we can join 5 and 6 and replace them with 11;
        we cannot join 2 and 4; we cannot join 1 and 3; etc.

    The main problem is to find the minimum number of join operations for a given sequence, after which the sequence is sorted in increasing order. Note: empty and one-element sequences are sorted in increasing order. Basic examples:

        for [4; 6; 5; 3; 9] the solution is 1 (we join 5 and 3)
        for [1; 3; 6; 5] the solution is also 1 (we join 6 and 5)

    What I am looking for is an algorithm that solves this problem. It could be in pseudocode, C, C++, PHP, OCaml or similar (I mean: I would understand the solution if you wrote it in one of these languages). I would appreciate your help.
