Search Results

Search found 59118 results on 2365 pages for 'data persistence'.

  • What's a fast way to copy a lot of files from an internal hard-drive to external (USB) storage?

    - by jonathanconway
    I have a large amount of data - about 500 GB - on the internal hard drive of a desktop PC. This includes music, videos, PDFs... you name it. I want to copy everything to an external USB hard drive (1.5 TB capacity). The desktop PC runs Ubuntu. To begin with, I simply plugged in and mounted the hard drive and dragged the top-level folder onto it. It started copying, but it seems to be proceeding very slowly: about 10 minutes in, it had only done about 500 MB. I'm sure this is slower than what I could achieve, so I'm wondering if there's a quicker way of doing this. Would it be better to copy the data in portions of 500 MB or so, rather than all at once?
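    One commonly suggested approach, as a hedged sketch (the source and mount-point paths are illustrative): do the copy with rsync from a terminal instead of the file manager, since it preserves metadata, shows progress, and can pick up where it left off if interrupted.

        # -a preserves permissions and timestamps; --progress shows what is being copied
        rsync -a --progress /path/to/data/ /media/external/backup/
        # if the copy is interrupted, re-running the same command resumes it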

  • SmartOS HPC config suggestion

    - by Andrew B.
    I'm configuring a brand new HPC server and am interested in using SmartOS because of its virtualization control and ZFS features. Does this configuration make sense for a SmartOS HPC, or would you recommend an alternative? System specs: 2x 8-core Xeon, 384 GB RAM, 30 TB of HDs with 2x 512 GB SSDs. Uses: ZFS for serving data to different VMs and over the network (one SSD for L2ARC and one for ZIL); typically 1-2 Ubuntu instances running R and custom C/C++ code. My biggest concerns, as a newbie to SmartOS and ZFS, are: (1) will I get near-metal performance from Ubuntu running on SmartOS if it is the only active VM? (2) how do I serve data from the global ZFS pool to the containers and other network devices?
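    On point (2), a hedged sketch of one way to serve a dataset from the global pool over NFS (the pool and dataset names are illustrative, and this assumes the NFS services are enabled in the global zone):

        # create a dataset on the global pool and export it over NFS
        zfs create zones/shared
        zfs set sharenfs=on zones/shared
        # optionally cap how much of the pool it may consume
        zfs set quota=5T zones/shared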

  • Using Windows Azure storage for backup

    - by Bruno
    I am currently looking at Windows Azure blobs as an option for backing up archive data. I want to be able to upload files from an external Windows machine via the internet, but I don't know enough about Windows Azure storage to make a decision. Some of the questions I have are: How do I upload the files? Is there a client application, or can I use robocopy? Would it be fast enough, i.e. could I download or upload 1 TB of data in a week? Is it secure? Hopefully someone smarter than me can help me :-)
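    On the upload question, a hedged sketch using Microsoft's AzCopy command-line tool (the account, container, and SAS token are placeholders; transfers go over HTTPS, which covers the in-transit security question):

        # from PowerShell: recursively upload a local folder to a blob container
        azcopy copy "D:\archive" "https://myaccount.blob.core.windows.net/backups?<SAS-token>" --recursive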

  • In Linux: how to exit a program but not kill it?

    - by biomed
    I use Ubuntu 10.10 and I have a Python program (Mnemosyne) whose data files I synchronize using Dropbox. My problem scenario is when I forget to close (exit) this program: I leave it running at home and go to work, but if I then open the program at work and work in it, the data file is changed, and I lose that progress when the copy at home exits (it automatically saves on exiting). I thought I could create a cron job to automatically close Mnemosyne every morning, regardless of whether I remember to do it, but if I use kill, the program exits without saving the data file, and I end up with a tmp file and an error message when I restart it. Is there a better way of sending the exit signal to this program, emulating me clicking the File > Exit menu option? Thanks
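    A hedged sketch of one way to do this: wmctrl can ask the window manager to close a window politely (the same request as clicking the window's close button), so the application runs its normal save-on-exit path. This assumes an X session on display :0 and that the window title contains "Mnemosyne"; adjust both as needed:

        # crontab entry: at 07:30 every morning, request a graceful close
        30 7 * * *  DISPLAY=:0 wmctrl -c "Mnemosyne"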

  • Existing tables with binaries to use filestream

    - by user1098487
    I've got a few tables for which I want to use FILESTREAM storage. These tables already contain binary data and have rowguids; however, at the time they were created, the tables were not added to a FILESTREAM-enabled filegroup. What is the best way to have these tables use FILESTREAM at this point? Do I need to drop and recreate the tables and migrate the data, or is there an easier way? The database already has FILESTREAM enabled, and there are other tables which are using it.

  • Issue with broken disk on Solaris with raidctl - how to proceed

    - by weismat
    I have a SunFire T2000 server which has two mirrored disk pairs. The server required an exchange of the system battery. After swapping the battery, at first no disks were found. After booting from CD we managed to find the disks, but now one disk is broken and raidctl reports a failed synchronisation. The boot process now stops when trying to mount the file systems. The power light of the broken drive is not even blinking. What is the best way to proceed? Fortunately I could live with losing the data on the broken drive, as it is backed up, but I would like to keep the rest of the data (it contains /etc) and get the server booting again.

  • How do I recover a RAID 1 volume on Mac OS X (10.7)?

    - by Avry
    I have a Synology NAS that I've set up with RAID 1. The device is set up with two drives, both the same size (i.e. 500 GB each), formatted in ext3, as a RAID 1 volume (i.e. even though the total capacity is 1TB, I effectively only get 500 GB). In the case of a device failure where I can only access one of the drives, how can I recover my data? The solution I'm looking for is something like: 'Put the working drive in an enclosure, and use <some software> to recover your data.'
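    A hedged sketch of that kind of answer, though it assumes a Linux machine or live USB rather than OS X, since Synology volumes are Linux md RAID members with ext3 on top, which OS X cannot read natively (device names are illustrative):

        # with the surviving drive in a USB enclosure, identify its data partition
        fdisk -l /dev/sdb
        # assemble the mirror in degraded mode from the single member, then mount read-only
        mdadm --assemble --run /dev/md0 /dev/sdb3
        mkdir -p /mnt/recovery && mount -o ro /dev/md0 /mnt/recovery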

  • How to duplicate a backup set from one media server to another

    - by MathematicalOrchid
    I really, honestly can't figure out how to do this. It's easy enough to open Backup Exec and tell it to duplicate the data on one local device onto another local device. What I cannot figure out is how to make it duplicate data from one local device to a remote device. I can connect to the remote BE server, but then I can only access the remote devices. I can connect to the local BE server, but then I can only access the local devices. I can't figure out how to get access to both the local and the remote devices simultaneously. Symantec Backup Exec 12.5 for Windows, in case it matters.

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing table index corruption almost every day on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but six months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know what to blame.

    We've tried to exclude all possible causes of the corruption. All users now work in terminal mode, so network problems can't be the cause, and OpLocks can't be a reason either. We changed the hardware, the network cards and the switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't take out of the picture is ADS, because it has to keep running.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the intervening change. Is that possible in theory? Could the problem be caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Does each terminal user have its own cache? Could RAID caching somehow interfere with Windows Server caching? Are there special Windows Server settings for DBF tables that are written simultaneously by several terminal users? Is there a way to turn off caching for certain files, just to test this?

    Sometimes we get an index crash twice a day; sometimes everything is fine for five days in a row. Today only one user was working with the database, in the evening (usually 30-50 users work simultaneously during working hours), so the server was under almost zero load. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing, and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA, even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently - not during the regular synchronizations, only when an order is placed by a website visitor.

    Please help. We have been struggling with this problem for almost a year and still can't find any pattern or clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

  • Outlook Registry Key Damaged; Tried "Fix It" and lost everything

    - by Ray
    My Outlook 2007 (on Windows 7 64-bit) worked fine for two weeks. I then installed a printer/scanner/copier, and after that the Outlook window wouldn't open. I went to Microsoft's website and found a page that said my registry key was damaged. The page had a link to a Fix It program. I ran the program, and it looks like all my Outlook data was wiped out. Can I get the data back? For future reference, how should I protect myself if the key goes bad again? Do you think I should uninstall Outlook and re-install?

  • Crap, hard disk failure. Can I recover my "move"d folders?

    - by Doug
    I am in the process of moving all my files from an old laptop to a new one. I just moved 11 GB of data from my old laptop to an external hard drive, and upon moving it off to the new laptop, the external drive is giving a CRC error (Data Error: Cyclic Redundancy Check). Now I am looking for a solution to recover the files I moved off the old laptop (not the external drive). I understand that they are just marked as free space and could be overwritten at any time. I was getting ready to test GetDataBack, but it says to install it on a healthy Windows system and attach the drive to be recovered as an external drive. However, I don't want to turn off my computer without first getting the okay, since the data is in a "moved" state. Please help! What can I do to recover the moved files? I haven't touched the computer since the move.
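    One hedged alternative sketch, assuming you can boot the old laptop from a live Linux USB so that nothing further is written to its internal disk: photorec (part of the TestDisk package) scans a drive for the contents of deleted files. The device name is illustrative:

        # from the live session, run the interactive recovery tool on the internal disk
        sudo photorec /dev/sda
        # when photorec asks for a destination, choose a *different* disk for the recovered files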

  • RAID 10: how does the layout work?

    - by Bastien974
    I'm trying to figure out how exactly RAID 10 works in Linux with mdadm. I want to create a RAID 10 out of 4 partitions, let's say a, b, c and d: a and b are on array 1, c and d on array 2. What I want is to have the pairs a/b and c/d in RAID 0, and then, on top of that, a RAID 1. The option in the mdadm command to configure the layout is -p/--layout, with the values near, far and offset (see here). I want to keep my data safe if array 1 fails, for example; that would mean every chunk of data is always copied to both arrays. How do I have to set up my RAID 10: near or far?
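    For reference, a hedged sketch of creating an md RAID 10 with an explicit layout (device names are illustrative). With --layout=n2, each chunk is stored on two adjacent devices in the order given, so listing the devices as a, c, b, d makes every mirror pair span both of the arrays described above:

        mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 \
              --raid-devices=4 /dev/sda1 /dev/sdc1 /dev/sdb1 /dev/sdd1
        # verify the pairing and watch the initial sync
        cat /proc/mdstat
        mdadm --detail /dev/md0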

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running Arch Linux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2x 2.4 GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to it, and we'd like the ability to keep it up as much as possible.

    What we require:
    - It needs to always be functional from 8am to 4pm, for our data-entry person and for others to do other things required on it.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.

    Some of the major problems include:
    - The server is old and may have memory issues.
    - We don't know when one of the hard drives could fail.
    - Our power goes out here once in a while. We have a battery backup, but that's pretty much it, and it's not for the long term.

    If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need the server up during the day. So we're wondering: what are our options? These are the things we thought of, sort of:
    - Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
    - We could buy an extra server or two off eBay for $100, the same model. Is that practical, or should we get something else?
    - Should we buy a PC or another, better server and host off that, because it would be easier to exchange parts?
    - Should we keep extra parts handy in case it implodes?
    - Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly.

    Also, if we are to purchase hardware, what is decent? Does anybody know of good backup software for Arch Linux/Linux? We both know a ton about computers, but we're kind of unsure what step to take with this, especially with this type of server. Thanks

  • Delete the pendrive contents and also trash in Mac OS X?

    - by Warrior
    I am using a MacBook Pro. I copied some data from my pen drive to my Mac and deleted the pen drive's contents by moving them to the Trash. After that, when I look at the pen drive's info, it reports more used space than it should. Only after I empty the Trash do I see the correct free space on the pen drive and become able to copy data onto it. Is Mac OS X designed to work like that, or is there some other way to delete files than the "Move to Trash" option? Thanks.
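    This is by design: the Finder moves files deleted from an external volume into a hidden .Trashes folder on that same volume, so the space is only reclaimed when the Trash is emptied. A hedged sketch of bypassing the Trash from Terminal (the volume name is illustrative, and rm is unrecoverable):

        # delete files on the pen drive immediately, without going through the Trash
        rm -rf /Volumes/MYPENDRIVE/somefolder
        # or clear the drive's hidden per-volume trash after a normal delete
        rm -rf /Volumes/MYPENDRIVE/.Trashes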

  • Is there a way to rsync in batches?

    - by Chris
    I have a huge chunk of data (11 GB) in a Subversion repository that I'm using rsync to migrate to Alfresco, whose Lucene indexer picks up new files as they hit the file system. I'm using a DAV mount as a proxy to allow me to rsync. The issue I'm having is that the post-rsync indexing is quite an expensive operation for such a huge chunk of data, so I was wondering whether there's a way I could logically separate the rsync into identically-sized batches (say 500 MB each) so I could schedule them in cron. At the moment I'm traversing the top-level folders and taking the smallest ones across first, but once I'm done with those, the much larger sub-directories are going to be quite troublesome. Please let me know if you need any further info. Thanks in advance.
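    A hedged sketch of one way to batch the transfer with rsync's --files-from option; it batches by file count rather than an exact 500 MB, but it bounds the size of each run (all paths are illustrative):

        # build a list of every file, split it into batches of 2000 files,
        # and sync one batch per run (e.g. one batch per cron invocation)
        cd /data/svn && find . -type f > /tmp/all-files.txt
        split -l 2000 /tmp/all-files.txt /tmp/batch.
        for b in /tmp/batch.*; do
            rsync -a --files-from="$b" /data/svn/ /mnt/alfresco-dav/
        done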

  • Is it dangerous to add/remove a hard drive on a Windows machine which is in standby?

    - by Adal
    Can I add a SATA drive to a Windows 7 machine which is in standby mode? The hardware supports hot-plug. Could pulling the drive out while in standby corrupt the data on the drive (unflushed caches, ...)? Does Windows flush before standing by? And how about swapping a drive for another drive of a different kind (SSD vs. mechanical disk) and size, also while in standby? Could the OS, when waking up, believe that the old drive is still there and write to it, thus corrupting the new one, since it has different partitions and data?

  • Sync two external hard drives?

    - by acidzombie24
    A little mishap happened earlier today, and I'm thinking I should have a copy of my external hard drive, since 10% of it is very valuable. What is the best solution for keeping two external hard drives in sync? I'll probably use one as the regular drive and the other only as a copy of the data. The simplest way to sync them is to clear one drive and copy the other over, but 1 TB of data will take a long time. What's a good existing app that will keep them in sync? Freeware preferred.
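    A hedged sketch, assuming a Unix-like system (on Windows, freeware such as FreeFileSync or the built-in robocopy fills the same role): rsync only copies what changed since the last run, so routine syncs stay fast even on a 1 TB drive. Mount points are illustrative:

        # mirror drive A onto drive B, removing files on B that no longer exist on A
        rsync -a --delete /media/driveA/ /media/driveB/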

  • Rsync over ssh with root access on both sides

    - by Tim Abell
    Hi, I have one older Ubuntu server and one newer Debian server, and I am migrating data from the old one to the new one. I want to use rsync to transfer data across, to make the final migration easier and quicker than the equivalent tar/scp/untar process. As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends, as not all files on the source side are world-readable and the destination has to be written into /home with the correct permissions. I can't figure out how to give rsync root access on both sides. I've seen a few related questions, but none quite match what I'm trying to do. I have sudo set up and working on both servers.
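    A hedged sketch of the usual trick: run rsync under sudo locally, and use --rsync-path to make the remote side run its rsync under sudo as well. This assumes passwordless sudo for rsync on the destination (e.g. a sudoers line like "tim ALL=NOPASSWD: /usr/bin/rsync"); the user and host names are illustrative:

        # the local rsync runs as root; the remote rsync is elevated via sudo
        sudo rsync -a -e ssh --rsync-path="sudo rsync" /home/someuser tim@newserver:/home/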

  • Linux script that indicates time the server was offline?

    - by RD
    Below is data taken from my dedicated server:

        root@namhost [~]# last
        root     pts/0        XXX              Tue May 18 09:46   still logged in
        root     pts/0        XXX              Mon May 17 08:51 - 12:18  (03:26)
        reboot   system boot  XXX              Mon May 17 08:49          (1+00:59)
        root     pts/0        XXX              Sun May 16 11:50 - 13:15  (01:25)

        root@namhost [~]# last | grep "system boot"
        reboot   system boot  2.6.18-164.15.1. Mon May 17 08:49          (1+01:02)
        reboot   system boot  2.6.18-164.el5   Tue May 11 04:20          (7+05:31)
        reboot   system boot  2.6.18-164.el5   Tue May 11 03:53          (7+05:58)
        reboot   system boot  2.6.18-128.el5   Mon Oct  5 22:40          (-3:-50)
        ....

    I need a script that I can run on an hourly basis that will:
    1. Calculate the total downtime since the first date
    2. Calculate the overall downtime percentage
    3. Store this data in a file at /home/bla/file.txt, in the following format:

        TotalDowntime=03:02:02
        Average=0.01%

    How do I go about doing this?
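    Reliably computing downtime from last's wtmp records is awkward (entries rotate out, and clean and unclean shutdowns look different), so here is a hedged alternative sketch: an hourly cron heartbeat that derives downtime from missed runs. The script and all paths are illustrative, not an existing tool:

        #!/bin/bash
        # downtime.sh - run hourly from cron:  0 * * * * /home/bla/downtime.sh
        LOG=/home/bla/heartbeat.log
        OUT=/home/bla/file.txt
        date +%s >> "$LOG"                         # one timestamp per successful run
        first=$(head -n 1 "$LOG")
        now=$(date +%s)
        expected=$(( (now - first) / 3600 + 1 ))   # hours cron should have fired
        actual=$(wc -l < "$LOG")                   # hours the machine was actually up
        down=$(( expected - actual ))
        pct=$(awk -v d="$down" -v e="$expected" 'BEGIN { printf "%.2f", d * 100 / e }')
        printf 'TotalDowntime=%02d:00:00\nAverage=%s%%\n' "$down" "$pct" > "$OUT"

    The resolution is one hour, matching how often cron fires; running the heartbeat more often (and adjusting the divisor) would give finer granularity.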

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive me if this is posted in an incorrect forum; we didn't know exactly where to post it.) We have an ASP.NET Web API single-page application: a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment; we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While developing and testing on development boxes, using IIS Express on a localhost:port to run the site and access the Web API, everything is fine.

    Now we need to move it to a production environment, and we're having problems - or just not understanding what needs to be done. The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users log in normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. So, I guess what they want is:

        corp domain (end users) <- firewall (open port 80) <- DMZ (web server running IIS) <- firewall (open port 80 or 1433????) <- process domain (IIS for Web API and SQL Server)

    We're developers and don't really understand all the networking aspects, so we're wondering how to deploy our browser/Web API application in this scenario. Do we need to break up the application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitems"? In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80, since the Web API is an HTTP endpoint? Or is there some better way to deploy this - i.e., how are ASP.NET Web API single-page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments?

    I'm sure there are other questions, but we'll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)

  • The best way to make a full system dump on CentOS [duplicate]

    - by tester3
    This question already has an answer here: Centos 5 Full backup (1 answer). I am on CentOS 6.5 with a lot of software and services installed and working. I've also got a lot of configs which damaged my brain, and I don't want to go through that again :) So, can anyone please advise the best way to make a full system dump with all data, so that I only need to copy it over to a new system to get everything ready on the other machine? Or something like that? P.S. The data on my HDD is encrypted, and I'd like an encrypted dump too. Please help :)
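    One classic approach, as a hedged sketch: tar up the root file system and encrypt the stream with a GPG passphrase on the way out. The destination path is illustrative, pseudo-filesystems must be excluded, and restoring onto different hardware may still need bootloader/initramfs work:

        tar -cpzf - --one-file-system \
            --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt --exclude=/tmp \
            / \
          | gpg --symmetric --cipher-algo AES256 -o /mnt/backup/full-$(date +%F).tar.gz.gpg
        # restore later with:
        #   gpg -d full-YYYY-MM-DD.tar.gz.gpg | tar -xpzf - -C /mnt/newroot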

  • SQLite DB borked when opened on a different machine

    - by pruefsumme
    Hello, I'm using SQLite to store some data. The primary database is on a NAS (Debian Lenny, 2.6.15, armv4l), since the NAS runs a script which updates the data every day. A typical "select * from tableX" looks like this:

        2010-12-28|20|62.09|25170.0
        2010-12-28|21|49.28|23305.7
        2010-12-28|22|48.51|22051.1
        2010-12-28|23|47.17|21809.9

    When I copy the DB to my main computer (Mac OS X) and run the same SQL query, the output is:

        2010-12-28|20|1.08115035175016e-160|25170.0
        2010-12-28|21|2.39343503830763e-259|-9.25596535779558e+61
        2010-12-28|22|-1.02951149572792e-86|1.90359837597183e+185
        2010-12-28|23|-1.10707273937033e-234|-2.35343828462275e-185

    The 3rd and 4th columns have the type REAL. Interesting fact: when the numbers are integers (i.e. they end with ".0"), there is no difference between the two databases. In all other cases the differences are ... hm ... surprising? I can't seem to find a pattern. If someone's got a clue - please share!
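    A hedged workaround sketch: the symptoms look like a floating-point byte-order mismatch between the NAS's old ARM ABI and x86, which copying the database file preserves byte-for-byte. Moving the data as SQL text instead sidesteps the binary format entirely (file names are illustrative):

        # on the NAS: dump the database to portable SQL text
        sqlite3 data.db .dump > data.sql
        # on the Mac: rebuild a native database from the dump
        sqlite3 fresh.db < data.sql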

  • MS-Outlook add-on to move a new message to the same folder as the rest of the thread

    - by Guss
    I'm forced to use MS-Outlook in my job. I very much like the feature that shows all the messages of a discussion thread (even those stored in different folders) in the inbox when a new message is received for that thread; but if the previous messages are in a different data file (which I'm forced to have, as the MS-Exchange server quota is very, very small), then the message list only shows the name of the data file, not the name of the folder where the messages are stored. Because I file my messages by context (i.e. all the emails for project A go into a "Project A" folder, etc.), and it's important for me to have all the messages of a thread in the same folder, it is sometimes hard to figure out into which folder I should file the new message. It would be a great help if there were some add-on or VBA script that could offer a shortcut key or a button to "file this message to the same folder as the previous messages in the conversation thread".

  • Is anyone working on an encfs client for Windows?

    - by snth
    I've been looking into encfs as a solution for encrypting my personal data. However, I want to access this data both on Linux and on Windows, on different machines (synced through Dropbox). So far all Google searches have brought up pages which state that there is no Windows client that reads encfs. Therefore my question is: is anyone working on a Windows client for encfs? It would be really useful, and the issue comes up often enough that I have a glimmer of hope someone might be working on it.

  • TFS 2010 migration from one server to another

    - by Kabir Rao
    We have followed http://msdn.microsoft.com/en-us/library/ms404869(v=vs.100).aspx, an extremely poorly worded article, step by step. We are not able to see the dashboards of SharePoint projects. In some cases (mostly Scrum projects, I guess) I get "The webpage cannot be found". In other cases: "Unable to refresh data for a data connection in the workbook. Try again or contact your system administrator. The following connections failed to refresh: TfsOlapReport". Any help would be very much appreciated.
