Search Results

Search found 7618 results on 305 pages for 'backup exec'.


  • aws s3 works in a script but not from cron

    - by user3800017
    Guys, my first post, hope it's not the last! I have a bunch of servers on the AWS EC2 platform. I made a simple script to back up my custom logs to their S3 storage bucket. The script works fine when run by hand, but when I add it to the crontab everything executes except the s3 mv part. Here is my code:

        NOW=$(date "+%b_%d_%Y")
        MY_HOSTNAME=`uname -n`
        mv /opt/req/req* /opt/req/bkup/
        mv /opt/response/res* /opt/req/bkup/
        cd /opt/req/bkup/
        tar -cvf ${MY_HOSTNAME}_req_bkup_${NOW}.tar re*
        rm *.txt
        aws s3 mv /opt/req/bkup/* s3://req
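
    A likely culprit is cron's environment rather than the script: cron runs jobs with a minimal PATH (typically /usr/bin:/bin), so an aws binary that lives in /usr/local/bin resolves in an interactive shell but not from cron, and the CLI also reads credentials from $HOME/.aws, which may differ under cron. A minimal sketch of the usual fix, assuming the CLI is at /usr/local/bin/aws (check with `which aws`) and the job runs as root; note too that `aws s3 mv` moves a directory with --recursive rather than a shell glob:

        #!/bin/bash
        # Cron's environment is minimal; set PATH and HOME explicitly.
        export PATH=/usr/local/bin:/usr/bin:/bin
        export HOME=/root                     # assumption: the cron job runs as root

        NOW=$(date "+%b_%d_%Y")
        MY_HOSTNAME=$(uname -n)

        mv /opt/req/req* /opt/req/bkup/
        mv /opt/response/res* /opt/req/bkup/
        cd /opt/req/bkup/ || exit 1
        tar -cvf "${MY_HOSTNAME}_req_bkup_${NOW}.tar" re*
        rm -f *.txt

        # Move the directory contents in one call and log what happens.
        /usr/local/bin/aws s3 mv /opt/req/bkup/ "s3://req/" --recursive

    Redirecting the job's output in the crontab line, e.g. `0 2 * * * /opt/req/backup.sh >> /var/log/req-backup.log 2>&1`, makes the next silent failure much easier to diagnose.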

    Read the article

  • sed replacement does not work

    - by Robin Hood
    Hello, I am having trouble using sed. I need to replace some lines in a very old HTML site that consists of many files. My script does not work and I do not know why; when I searched for the exact pattern in NetBeans, it matched.

        find . -type f -name "*.htm?" -exec sed -i -r 's/ing\. Šuhajda Dušan\, Mírová 767\, 518 01 Dobruška\, \+420 737 980 333\,/REPLACEMENT/g' {} \;

    Where is the mistake? Is there an alternative that replaces plain text without treating the search string as a regular expression? Thanks for any response.
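
    Two things are worth checking. First, the pattern contains non-ASCII characters (Š, í, š), and old HTML sites are often stored in a legacy encoding such as windows-1250; a UTF-8 pattern will then never match the files' bytes even though an editor shows identical text. Second, for literal (non-regex) replacement, Perl's \Q...\E quoting disables metacharacters entirely, so nothing needs escaping. A hedged sketch of both ideas:

        # Check what encoding the files actually use (a UTF-8 pattern
        # will not match windows-1250 bytes):
        file -i ./somepage.htm

        # Literal in-place replacement: \Q...\E treats the text verbatim.
        find . -type f -name "*.htm?" -exec perl -i -pe \
          's/\Qing. Šuhajda Dušan, Mírová 767, 518 01 Dobruška, +420 737 980 333,\E/REPLACEMENT/g' {} \;

        # If the files are in a legacy encoding, recode the pattern first, e.g.:
        #   printf '%s' 'pattern' | iconv -f UTF-8 -t WINDOWS-1250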

    Read the article

  • fork within Cocoa application

    - by liuliu
    My problem is not the best scenario for fork(); however, it is the best function I have available. I am working on a Firefox plugin on Mac OS X. To make it robust, I need to create a new process to run my plugin. The problem is that when I fork a new process, much like this:

        if (fork() == 0) exit(other_main());

    the state is not cleaned up, so I cannot properly initialize the new process (call NSApplicationLoad etc.). Any ideas? BTW, I certainly don't want to create a new binary and exec it.
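
    On OS X the system frameworks are not fork-safe: between fork() and exec() the child may not safely touch CoreFoundation or Cocoa, which is exactly why NSApplicationLoad misbehaves there. The standard workaround keeps a single binary but has the child exec that same binary again with a marker argument, yielding a clean process image. A minimal sketch, where --helper and helper_main are hypothetical names:

        #include <string.h>
        #include <unistd.h>

        int helper_main(void);   /* hypothetical: the plugin's real entry point */

        int main(int argc, char *argv[]) {
            if (argc > 1 && strcmp(argv[1], "--helper") == 0)
                return helper_main();     /* fresh process: Cocoa initializes normally */

            if (fork() == 0) {
                /* child: re-exec our own binary so framework state is reset */
                execl(argv[0], argv[0], "--helper", (char *)0);
                _exit(127);               /* reached only if exec fails */
            }
            /* parent continues as the plugin host */
            return 0;
        }

    If argv[0] is not reliable in the plugin context, _NSGetExecutablePath() returns the real executable path.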

    Read the article

  • Use LINQ to SQL results inside SQL Server stored procedure

    - by ifwdev
    Note: I'm not trying to call a SQL Server stored proc from an L2SQL DataContext. I use LINQPad for some fairly complex "reporting" that takes L2SQL output saved to an array and processes it further. For example, it's usually much easier to do multiple levels of grouping with LINQ to Objects than to optimize a T-SQL query to run in a reasonable amount of time. What would be the easiest way to take the end result of one of these "applications" and use it in a SQL Server 2008 stored proc? The idea is to use the data for a Reporting Services report, rather than copying and pasting into Excel (manual labor). The reports need to be accessible on the report server (not using the Report Viewer control in an application). I could output CSV and read that somehow via a command-line exec, but that seems like a hack. Thanks for your help.
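
    One way to avoid the CSV hack is to bulk-load the processed objects into a staging table that the stored procedure (and therefore the SSRS report) reads. A minimal C# sketch, runnable from LINQPad; dbo.ReportStaging, its columns, and `results` are hypothetical stand-ins for your schema and your processed array:

        using System.Data;
        using System.Data.SqlClient;

        var connectionString = "...";                 // your SQL Server 2008 connection
        var table = new DataTable();
        table.Columns.Add("GroupKey", typeof(string));
        table.Columns.Add("Total", typeof(decimal));

        foreach (var r in results)                    // the processed LINQ-to-Objects output
            table.Rows.Add(r.GroupKey, r.Total);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.ReportStaging";
            bulk.WriteToServer(table);
        }

    The stored procedure then just selects from dbo.ReportStaging, and the report stays a plain server-side report.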

    Read the article

  • How to show a warning message when entering a folder?

    - by Valter Henrique
    I don't know if this is possible, but I have a folder for which I would like to show a warning message when a user enters it. In my case it would say that the folder can be deleted without prior notice to save disk space. I have already created a file inside the folder with the warning message:

        WARNING!
        ####################################################################
        Please be advised that the folder /company-backup/amazon-s3 can be
        deleted without previous WARNING to save disk space, as the
        INFRASTRUCTURE TEAM judges necessary.
        Best regards, Infrastructure Team.
        ####################################################################

    Is that possible? Any ideas?
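
    There is no filesystem hook that fires when someone enters a directory, but for interactive shell users you can wrap cd so the notice is printed on entry. A minimal sketch for /etc/bash.bashrc or each user's ~/.bashrc, assuming bash and the path from the question:

        # Print the warning whenever an interactive shell enters the folder.
        cd() {
            builtin cd "$@" || return
            case "$PWD" in
                /company-backup/amazon-s3*)
                    cat /company-backup/amazon-s3/WARNING.txt 2>/dev/null
                    ;;
            esac
        }

    The limitation is that only shell users see it; anyone browsing over SMB/NFS or with a file manager never triggers it, so the README file you already created is still the right fallback.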

    Read the article

  • Switch statement for string matching in JavaScript

    - by yaya3
    How do I write a switch for the following conditional? If the URL contains "foo", then settings.base_url is "bar". The following achieves the effect required, but I have a feeling this would be more manageable as a switch:

        var doc_location = document.location.href;
        var url_strip = new RegExp("http:\/\/.*\/");
        var base_url = url_strip.exec(doc_location);
        var base_url_string = base_url[0];

        //BASE URL CASES
        // LOCAL
        if (base_url_string.indexOf('xxx.local') > -1) {
            settings = { "base_url" : "http://xxx.local/" };
        }
        // DEV
        if (base_url_string.indexOf('xxx.dev.yyy.com') > -1) {
            settings = { "base_url" : "http://xxx.dev.yyy.com/xxx/" };
        }

    Thanks
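
    A switch compares its expression with === and cannot match substrings directly; the two usual alternatives are switch (true) with boolean case expressions, or a lookup table, which scales better as hosts are added. A sketch of both, using the same host strings:

        // Option 1: switch (true); each case is a boolean expression.
        switch (true) {
          case base_url_string.indexOf('xxx.local') > -1:
            settings = { "base_url": "http://xxx.local/" };
            break;
          case base_url_string.indexOf('xxx.dev.yyy.com') > -1:
            settings = { "base_url": "http://xxx.dev.yyy.com/xxx/" };
            break;
        }

        // Option 2: a lookup table; adding a host is one line.
        var bases = {
          'xxx.local': 'http://xxx.local/',
          'xxx.dev.yyy.com': 'http://xxx.dev.yyy.com/xxx/'
        };
        for (var host in bases) {
          if (base_url_string.indexOf(host) > -1) {
            settings = { "base_url": bases[host] };
            break;
          }
        }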

    Read the article

  • How to debug "MySQL server has gone away"?

    - by fefe
    I have a virtual machine (Ubuntu 12.04, MySQL 5.5) running under VMware, dedicated to hosting a MySQL server, which I connect to over an internal IP. I'm trying to find out why I get the "MySQL server has gone away" error; Apache on one of my Windows machines stops because of this issue. I have been trying to fine-tune my my.cnf with the following parameters, but they did not bring the desired result:

        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 0.0.0.0
        #
        # * Fine Tuning
        #
        wait_timeout = 180
        key_buffer = 384M
        max_allowed_packet = 64M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 500
        table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 32M

    How do I debug this issue? What is missing from the configuration to avoid this error?
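
    The two usual causes of "MySQL server has gone away" are an idle connection outliving wait_timeout and a query or packet larger than max_allowed_packet. A short diagnostic sketch from the mysql client, to confirm what the running server actually uses (it can differ from my.cnf if the server was not restarted) and whether clients are being dropped:

        -- Effective limits on the running server:
        SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
        SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';

        -- Counters that grow when clients time out or disconnect uncleanly:
        SHOW GLOBAL STATUS LIKE 'Aborted_clients';
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';

    If Aborted_clients climbs while Apache sits idle, the timeout is the likely cause: with wait_timeout = 180, any connection Apache keeps idle for three minutes will produce exactly this error, so raise the timeout or enable reconnection/pooling on the client side.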

    Read the article

  • ext4: error loading journal

    - by cloudyOutside
    I have an external hard drive with two partitions: a small FAT32 partition, which is mostly empty and works fine, and a large ext4 partition with tons of data, most of which isn't backed up. The ext4 partition is visible but can't be mounted; I get an "error loading journal" message. The drive is a Western Digital Caviar Blue 500GB; roughly 30GB of that is FAT32 and the rest is the ext4. The light on the enclosure (made by Cavalry) turns red when reading from the bad partition. There wasn't any warning, but coincidentally I've been thinking lately that I should get two large-capacity drives for real backups. Is there anything that can be done? I'm not even sure I have enough storage to back everything up, even if it is recoverable.
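
    Before any repair attempt, image the partition and work on the copy; a journal that fails to load can often be bypassed read-only or rebuilt by fsck. A hedged sketch, assuming the ext4 partition is /dev/sdb2 (check with `sudo fdisk -l`) and there is space for the image:

        # 1. Image the failing partition with GNU ddrescue; the mapfile lets you resume.
        sudo ddrescue /dev/sdb2 /mnt/space/ext4.img /mnt/space/ext4.map

        # 2. Try mounting the copy read-only while skipping the broken journal:
        sudo mkdir -p /mnt/rescue
        sudo mount -o loop,ro,noload /mnt/space/ext4.img /mnt/rescue

        # 3. If that fails, let e2fsck repair the copy (it can recreate the journal):
        sudo e2fsck -f /mnt/space/ext4.img

    The red light on the enclosure suggests real read errors, so every extra pass over the raw disk carries risk: image once, then experiment freely on the file.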

    Read the article

  • Bash script getting automatically deleted from Ubuntu 12.04 Server?

    - by Kris Anderson
    I'm running a bash script on an Ubuntu 12.04 server through cron. The script works fine for a few weeks (it runs daily backups of websites and MySQL databases and copies them to Amazon S3). However, twice now I've noticed that the backups stopped happening. Both times, the backup script (backupscript.sh) located in my home folder was simply no longer there. No one else has access to this server, so nothing was changed manually and no one deleted the file by mistake. The cron job in /etc/crontab still references the script, but the script itself disappears. What could cause this to happen? Does Ubuntu delete a script if it runs into some sort of error?

    Read the article

  • How to copy Netscape email

    - by Olav
    I think I have the Netscape mail directory from an old computer; how do I copy it to the new computer (Netscape 7.1 Mail, Thunderbird or SeaMonkey)? I think I have the files in Olduserbackup\xjuwtwtb.slt\Mail. I created a new mail account with server pop.superuser.com and found a directory with that name in C:\Users\myusername\AppData\Roaming\Mozilla\Profiles\default\ou6umlif.slt\Mail. I replaced the files with those from the backup, but Netscape still shows pop.superuser.com in its interface. Is there some kind of registry setting somewhere I will have to change?

    Read the article

  • Deleting Time Machine backups in Mac OS X 10.6.4

    - by cappuccino
    Does anyone know how to delete Time Machine backups in Mac OS X 10.6.4? Before answering, here is what has already failed:

        - sudo rm -rf /whateverthetimemachineis does not work
        - disabling the ACL permissions first with sudo fsaclctl -p /whatever -d does not work (sudo: fsaclctl: command not found)
        - the "delete all backups" feature in Time Machine is slow as hell and would take days; I need a command-line solution
        - no, I don't want to reformat the drive; I have other content on it (and don't say I should have used two partitions or two drives: partitions cannot be resized dynamically, a second drive is annoying and defeats the point of having one big drive, and anyway that has no relation to the issue at hand)

    I have already googled for hours and read everything on Super User; nothing is working, and every proposed solution is one of the four above. Any clues?
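
    The reason rm fails even as root is that Time Machine protects Backups.backupdb with deny-style ACLs. On 10.6, fsaclctl is gone, but chmod can strip the ACL entries recursively, after which a plain rm works. A hedged sketch, assuming the backup set lives on /Volumes/External:

        # Remove the ACLs that block deletion, then delete the tree.
        sudo chmod -R -N "/Volumes/External/Backups.backupdb"
        sudo rm -rf "/Volumes/External/Backups.backupdb"

    This still walks millions of hard-linked files, so it is not instant, but it is far faster than deleting through the Time Machine UI.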

    Read the article

  • Updating to Exchange 2013 - any way to do it now?

    - by TomTom
    Exchange 2013 is out and already available to some people; I got it from the Volume Licensing Center and am now trying to work out an upgrade path for some customers. Problem: there is no in-place upgrade. It is "install on a new server, move mailboxes", which means coexistence with Exchange 2010 for the time it takes to move the mailboxes. Sadly, the only compatible version is Exchange 2010 SP3, which is not going to be out for quite some time. Is there any way to do an update now? Backup and restore to a new server? Any beta of the SP that is good enough to ONLY move the mailboxes? I do not care about the rest; this really is "install Exchange 2013, move mailboxes, uninstall 2010". I am quite, ah, unhappy that for now the only ones who can install 2013 are brand-new companies.

    Read the article

  • Different parameter value results in slow query

    - by alphadogg
    I have an sproc in SQL Server 2008. It basically builds a string and then runs the query using EXEC():

        SELECT * FROM [dbo].[StaffRequestExtInfo] WITH(NOLOCK,READUNCOMMITTED)
        WHERE [NoteDt] < @EndDt
          AND [NoteTypeCode] = @RequestTypeO
          AND ([FNoteDt] >= @StartDt AND [FNoteDt] <= @EndDt)
          AND [FStaffID] = @StaffID
          AND [FNoteTypeCode] <> @RequestTypeC
        ORDER BY [LocName] ASC, [NoteID] ASC, [CNoteDt] ASC

    All but @RequestTypeO and @RequestTypeC are passed in as sproc parameters; those two are built from a parameter into local variables. Normally, the query runs in under one second. However, for one particular value of @StaffID, the execution plan is different and about 30x slower. In either case, the amount of data returned is generally the same, but execution time goes way up. I tried to recompile the sproc. I also tried to "copy" @StaffID into a local @LocalStaffID. Neither approach made any difference. Any ideas?
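
    This is the classic parameter-sniffing pattern: one atypical @StaffID gets a plan that is wrong for the rest (or vice versa). Since recompiling the sproc and copying to a local variable did not help, two statement-level hints are worth trying; a sketch against the query from the question (if the string is executed with EXEC() the values are inlined as literals, so consider running it via sp_executesql with a parameter list to make the hints meaningful):

        SELECT *
        FROM [dbo].[StaffRequestExtInfo] WITH (NOLOCK, READUNCOMMITTED)
        WHERE [NoteDt] < @EndDt
          AND [NoteTypeCode] = @RequestTypeO
          AND [FNoteDt] BETWEEN @StartDt AND @EndDt
          AND [FStaffID] = @StaffID
          AND [FNoteTypeCode] <> @RequestTypeC
        ORDER BY [LocName], [NoteID], [CNoteDt]
        OPTION (RECOMPILE);                -- fresh plan per execution
        -- or instead:
        -- OPTION (OPTIMIZE FOR (@StaffID UNKNOWN));   -- plan for the average case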

    Read the article

  • Shrinking physical volumes in LVM on a Linux Guest in ESXi 5.0

    - by Stew
    The problem: a Linux guest (openSUSE 12.1) with multiple virtual disks attached. Three disks are in a logical volume, two of which are exactly 2TB. None of the disks are independent, and due to the backup software we use, they cannot be independent. When the two 2TB virtual disks are dependent, the snapshot fails, stating that the file is too large for the datastore. When I put those two disks in independent mode, snapshots work fine (the other disk is 1.8TB). I have therefore concluded that shrinking the two disks by even 100GB should solve the problem; however, I am having trouble conceptualizing how to make those disks smaller without breaking the LVM entirely. The LV has 1.3TB free, so there is plenty of space to shrink into. What I need to accomplish: deallocate 100GB from the two 2TB virtual disks within the Linux guest, then shrink the two virtual disks by 100GB within vSphere (not as complicated). Are there any vSphere/LVM gurus who can give me a clue?
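
    The order of operations matters: shrink the filesystem first, then the logical volume, then the physical volume, and only then the virtual disk; doing it in the wrong order destroys data. A hedged sketch with illustrative names and sizes (assume the LV is /dev/vg0/data with ext4 and one 2TB disk is /dev/sdb; take a backup first):

        # 1. Shrink the filesystem well below the target LV size.
        umount /dev/vg0/data
        e2fsck -f /dev/vg0/data
        resize2fs /dev/vg0/data 3400G

        # 2. Shrink the LV, keeping a margin above the filesystem size.
        lvreduce -L 3500G /dev/vg0/data

        # 3. Shrink the PV by ~100G; pvresize refuses if extents sit beyond the
        #    new boundary, in which case relocate them first with a PE range:
        #      pvmove /dev/sdb:START-END    (ranges from `pvdisplay --maps`)
        pvresize --setphysicalvolumesize 1900G /dev/sdb

        # 4. Only now shrink the corresponding virtual disk in vSphere, then
        #    grow the filesystem back to fill the LV if desired:
        resize2fs /dev/vg0/data

    The invariant to preserve at every step is filesystem size <= LV size <= total PV size.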

    Read the article

  • Mac failing (failed?) hard drive - is all hope lost?

    - by Daniel
    It's a 500 GB Seagate laptop hard drive that came with my MacBook Pro, Apple partition format. It has already been replaced and is now external, connected via a SATA/USB adapter. I'm trying to get just a few files that I worked on while out of town when it crashed (and thus did not have my Time Machine backup drive). The drive will not mount, but OS X Disk Utility detects it and can read the capacity, model number, and even the name of the partition, which leads me to believe all hope may not be lost. Failed attempts so far:

        - Disk Utility verify + repair says the drive cannot be repaired and that I should back up immediately (lovely)
        - DiskWarrior says it cannot rebuild the directory due to hardware failure
        - Data Rescue quick and deep scans failed immediately
        - PhotoRec says "error reading sector" for every sector (at least for the few minutes I let it run before closing it to explore other options)

    What else can I try here? Again, I'm just looking for a few small files (Python scripts, to be specific), not a full recovery.
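
    When every tool hits read errors at the sector level, stop pointing recovery software at the disk; each scan stresses the failing hardware further. The safer route is to image the drive once with GNU ddrescue (installable via Homebrew or MacPorts) and run the recovery tools against the image. A sketch, assuming the external appears as /dev/disk2 (confirm with `diskutil list`):

        # Unmount (not eject) so nothing touches the disk while imaging.
        diskutil unmountDisk /dev/disk2

        # Pass 1: grab the easy sectors fast; the mapfile records progress.
        sudo ddrescue -n /dev/rdisk2 seagate.img seagate.map

        # Pass 2: go back and retry the bad regions a few times.
        sudo ddrescue -r3 /dev/rdisk2 seagate.img seagate.map

        # Now run Data Rescue or PhotoRec against seagate.img instead of the disk.
        # For a handful of plain-text Python scripts, even grepping the image
        # for a known string can locate them:
        #   grep -a -b "def main" seagate.img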

    Read the article

  • SQL Server 2000 stored procedure: prevent parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has Parallelism operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.
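
    If the parallel plan is the regression, you can force the statement serial with a MAXDOP hint, which is often the quickest fix on SQL Server 2000; the server-wide cost threshold is the blunter alternative. A sketch (the hint goes on the offending query inside the sproc):

        -- Force a serial plan for this statement only.
        SELECT ...
        FROM   ...
        WHERE  ...
        OPTION (MAXDOP 1)

        -- Or raise the bar server-wide so mid-cost queries stay serial
        -- ('show advanced options' must be enabled to change this setting):
        EXEC sp_configure 'cost threshold for parallelism', 20
        RECONFIGURE

    If the serial version restores the old speed, the underlying cause is often stale statistics inflating the optimizer's cost estimate, so UPDATE STATISTICS on the main tables is worth trying as well.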

    Read the article

  • Switched to another cPanel server and restored mailboxes: how do I stop Outlook from pulling the mail again via POP3?

    - by Shiro
    Hi, we switched from one Linux cPanel server to another, and all settings and files were restored onto the new server. For example, [email protected] had 100 mails on the old server, which had already been downloaded to my Outlook, so my Outlook also holds those 100 mails. After the switch, the 100 mails on the old server were transferred to the new server. However, when I open Outlook, it starts downloading the 100 mails again, which means I end up with 200 mails in my mailbox, all duplicated. The timestamp, receiver and sender are exactly the same. How can I stop Outlook from downloading them again? We want to keep a copy on the server because we don't have a physical server to store the mail; that is why we transferred the mailboxes to the new server. Does anyone have a solution?

    Read the article

  • DPM: Monitoring shows a failure, Protection is green, and the latest recovery point is old. How do I interpret that?

    - by LosManos
    How do I read the DPM info in this case? Monitoring says Failed, but Protection shows OK while having a latest recovery point from last year. Under the Monitoring tab I have a failure:

        Source                     | Computer     | Protection group | Start time
        Computer\System Protection | MyServerName | Recovery point   | 2014-06-09 19:00:00

    which shows me that something happened last night. But under the Protection tab everything is green. There I have:

        Protection group member                  | Protection status
        Protection group ..name..
          Computer: MyServerName
            Computer\System protection
              Bare metal recovery                | OK
        ...
        Latest recovery point: 2013-12-12 06:32:54

    My guess is that the backup failed once last night but succeeded later, then found there hadn't been any change since sometime last year, left it be, and flagged OK.

    Read the article

  • Upgrading a non-RAID server to RAID

    - by AZee
    I have just learned that our PDC has a single drive with two partitions. I also know that this drive has bad blocks, as recorded in the event log. What I would like to do is convert this to a RAID solution with a nice balance between economy and performance. I will admit that I have only configured servers with RAID from scratch and have no experience upgrading an existing system to RAID; in fact, I'm not sure it is even possible. Since this is the PDC for 350+ workstations, downtime matters. I'd like to hear from other system administrators how they would tackle this, and their recommendations for all the hardware involved. At this point it seems I can either replace the existing drive and restore from backup, or install a controller and drives, configure the RAID, and basically start from scratch. Thank you for taking your time. ~AZee

    Read the article

  • Samba share doesn't have write permissions

    - by blsub6
    Alright, I've got one that should be really simple. I want a wide-open SMB share for my Windows 7 machine; everyone should be able to access it, regardless of domain or username or anything. My smb.conf has:

        security = share
        guest account = nobody

    along with:

        [DC_Backup]
        path = /Windows_Backups/DC
        comment = Backup of Domain Controller
        force user = nobody
        guest ok = yes
        public = yes
        read only = no

    I can access it, but I cannot write to it; Windows keeps telling me I "need permission to perform this action". Where do I start?
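
    With security = share and force user = nobody, every access hits the filesystem as the Unix user nobody, so smb.conf alone cannot grant what the directory's permissions deny; the path itself must be writable by nobody. A short sketch to check and fix, using the path from the share:

        # See who owns the tree and what the mode is:
        ls -ld /Windows_Backups/DC

        # Hand it to the guest account Samba forces and make it writable:
        sudo chown -R nobody /Windows_Backups/DC
        sudo chmod -R u+rwX /Windows_Backups/DC

    The config side is already correct (read only = no is equivalent to writable = yes); in virtually every case like this, the blocker is the Unix permissions underneath the share.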

    Read the article

  • Xen P2V for large physical hosts with much free space

    - by Sirex
    I need to P2V an RHEL 5 machine to Xen under RHEL 5. I know I can dd if=/dev/sda and then use virt-install --import on the host, but the downside is that the original machine has 80% free space on its drive. Does anyone know of (or can document) a quick and easy method, which works reliably, to produce a bootable Xen image that can run under an HVM in such cases? I tried Clonezilla to make the image, to avoid the free-space problem, but it failed to clone with "something went wrong" (useless info, I know). At the moment I'm looking at doing a dd of each partition, plus a file-level copy of the partition which is mostly empty, then creating a new virtual disk, copying the partitions over to it by mounting both the new image and the virtual drive on a second VM, then copying the boot sectors over, then copying the file-level backup... there must be an easier way? Oh, and the budget is $0. :)
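
    dd copies every block, used or not, but if you zero the free space first, the unused 80% compresses to almost nothing in transit, and a sparse-aware restore keeps it from occupying space on the host. A hedged sketch (conv=sparse needs a reasonably recent GNU dd; on older coreutils, a follow-up cp --sparse=always over the restored file achieves the same):

        # On the source machine: fill free space with zeros, then remove the filler.
        dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile; sync

        # Image and compress:
        dd if=/dev/sda bs=1M | gzip -c > /mnt/backup/sda.img.gz

        # On the Xen host: restore sparsely; zeroed blocks are simply not written.
        zcat /mnt/backup/sda.img.gz | dd of=/var/lib/xen/images/guest.img bs=1M conv=sparse

        # Then run virt-install --import against guest.img as before.

    The result boots identically to a plain dd image, since the guest sees the same full-size disk; only the host-side storage is reduced.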

    Read the article

  • Multiple .bkf files created in Backup Exec 12.5 or 2010: related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever seen multiple .bkf files created for a single job. By multiple files I mean .bkf files created with random sizes under 2GB, even though I've set the option to start a new file at 10GB. Some jobs will create 20 .bkf files in one job, with chunks ranging from 50MB to 800MB. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek advice and suggestions. I've set up another backup server with the exact same settings, and it seems to create a new .bkf file only when the 10GB limit has been reached. I am backing up different machines, but I know my settings are an exact match to the problematic server's, or at least I think so.

    Read the article

  • Looking for an actual experience of RAID 5 2 drive failure?

    - by Brian
    I'm wondering if anyone has personal experience of a two-drive RAID 5 failure with large drives? As I understand it, the theory is that with large 1-2TB drives, if one drive in the RAID set fails, the rebuild has to read all the other drives very hard, so the chance of another failure goes up, especially if the drives came from the same manufacturing batch. And if you lose another drive, you lose all the data. This is usually explained after the statement "RAID is not backup", which I agree with. The theory makes sense and I understand it, but does it really happen?

    Read the article

  • Duplicate name exists solution

    - by user978733
    I have about 70 PCs with exactly the same hardware. I decided to automate turning them on and off, so I took one PC and did the following:

        - changed the BIOS configuration so that the PC wakes when I turn on the AC switch
        - installed Windows XP and configured it so that I can turn it off remotely; changed the workgroup name to "WG1" and the PC name to "ExamPC"
        - created an Acronis backup image of this PC

    I installed this image on several PCs and tried to test. All worked well until Windows started: the problem is that all the tested PCs started Windows at nearly the same time, and every one of them popped up the error that a duplicate name exists on the network. I can't figure out a solution. Any suggestions?
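
    Cloning an installed Windows without generalizing it gives every clone the same computer name (hence the duplicate-name error the moment several of them hit the network) and the same SID. The standard fix is to run Sysprep on the master immediately before taking the Acronis image, so each clone runs mini-setup and receives its own name. A hedged sketch for XP; the tool ships in DEPLOY.CAB under \SUPPORT\TOOLS on the install CD, and the exact switches should be verified against that version:

        REM Extract DEPLOY.CAB from \SUPPORT\TOOLS on the XP CD into C:\sysprep, then:
        C:\sysprep\sysprep.exe -mini -reseal -quiet

        REM The machine shuts down resealed; take the Acronis image of this state.
        REM Each restored clone then runs mini-setup and gets its own name; with
        REM ComputerName=* in sysprep.inf, a name is generated automatically.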

    Read the article

  • Activating Windows 7 generates error code 0xc004F061

    - by Jon
    I got a new SSD and wanted to start over with Windows 7 on that disk. I did a clean install (my mistake) on the SSD and just skipped the activation part (left the key blank). Now that I have my system all set up, configured, files pulled back from backup, and ready to go, I'd like to activate Windows 7. However, I now get this error:

        The following failure occurred while trying to use the product key:
        Code: 0xC004F061
        Description: The Software Licensing Service determined that this
        specified product key can only be used for upgrading, not for clean
        installations.

    Do I really need to wipe my system again, install Windows Vista, and then do the Windows 7 upgrade in order to use my upgrade key? Is there some kind of workaround?
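
    A widely reported workaround for using an upgrade key on a clean Windows 7 install, without reinstalling Vista first, is to clear the MediaBootInstall flag, rearm licensing, and activate again. A hedged sketch from an elevated command prompt (registry value as commonly documented; make sure your upgrade license genuinely covers this machine):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v MediaBootInstall /t REG_DWORD /d 0 /f
        slmgr /rearm
        shutdown /r /t 0

        REM After the reboot, enter the upgrade key and activate:
        slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
        slmgr /ato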

    Read the article
