Search Results

Search found 64995 results on 2600 pages for 'data import'.


  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point; I have been looking for a "big storage" solution for a while on my own and I can't find anything that would suit my needs. But now push has come to shove. Current situation: I have about 6TB of data storage (already full) - a Drobo. Yesterday the Drobo died on me, which puts me in a bad situation - I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems like a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo makes a repeat of this event likely. What I am looking for: 1) Inexpensive setup. 2) Dynamically extendable - add more drives and/or replace a drive with a bigger one. 3) Redundant - protected against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives one should be able to fail without data loss. 4) Easy data recovery - let's say the unforeseen happens; I would like to be able to recover the information without buying new tools or replacements - example: a new Drobo. 5) Should be USB or Network Attached Storage. 6) No demand on speed. It doesn't have to be fast, as I am not doing video editing on the setup; however, if the option exists, a decent speed would be nice to have. Afterthoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't have #2 - dynamic extendability. There are workarounds with pools, but they seem a bit complicated and unnecessary. Moreover, data safety seems to be a big question - I saw some horror stories. Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software that has to run on top, but a simple solution is more attractive. Thank you! P.S.: Feel free to ignore the "Afterthoughts".
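
    For illustration only, one way to cover requirements 2-4 with commodity hardware is Linux software RAID (mdadm), which any Linux live CD can assemble for recovery. A rough sketch, assuming four data drives at the hypothetical device names /dev/sdb through /dev/sdf:

        # Create a 4-drive RAID6 array (survives two simultaneous drive failures)
        sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
        sudo mkfs.ext4 /dev/md0
        sudo mount /dev/md0 /mnt/storage

        # Later, grow the array by adding a fifth drive
        sudo mdadm --add /dev/md0 /dev/sdf
        sudo mdadm --grow /dev/md0 --raid-devices=5
        sudo resize2fs /dev/md0        # expand the filesystem once the reshape finishes

    Because the array metadata lives on the drives themselves, recovery only needs any machine that can run mdadm, not a specific appliance.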

    Read the article

  • How can I run my program on a large number of computers? [closed]

    - by zenpoy
    I'm looking for a (preferably free) service for running an executable I wrote. It's not malicious, it's not a virus, it's not a scam, and if this is really important I can upload the Python source code instead. I wrote a small crawler to gather information about the style of web pages for my MA project, and I need a lot more data. EDIT Here is more information on my problem, how I am approaching it, and where I'm stuck. As part of my research I'm trying to classify text based on its style (font-family for now). My data comes from web pages, so I wrote a client/server application - the client is a crawler that gathers this data and sends it to the server. The problem is that something like 99% of the web uses Arial, Verdana and Helvetica - other fonts are far rarer, so I would need to spend a very long time gathering enough data about those fonts. Hope this explains it.

    Read the article

  • disk write cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off the write-cache buffer (see image): "Turn off Windows write-cache buffer flushing on the device - To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure." Is it feasible and economical to get such a "separate power supply" for the internal SATA drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives, but what is required to be able to switch this setting on safely for an internal disk? The setting has different descriptions in different versions of Windows. Windows XP: "Enable write caching on the disk - This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption." Windows Server 2003: "Enable write caching on the disk - Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power." Windows Vista: "Enable advanced performance - Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power." Windows 7 and 8: "Turn off Windows write-cache buffer flushing on the device - To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure." This article by Raymond Chen has some more detailed information about what the setting does.

    Read the article

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events: A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in degraded state. While the pool is in degraded state, the pool is mutated (data is written and/or changed.) The connectivity issue is physically repaired such that device D is reliable again. Knowing that most data on D is valid, and not wanting to stress the pool with a resilver needlessly, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected. I've read that zpool clear only clears the error counter, and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device - however, the pool status will not reflect this issue! I would at least assume based on ZFS' robust integrity mechanisms that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems: Reads are not guaranteed to hit all mutations unless a scrub is done; and Once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures. Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state, and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.
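
    Not an answer to the internals question, but for illustration, a minimal sketch (reusing the pool and device names from the question) of how to force any stale data on D to be detected and repaired right away, rather than waiting for ordinary reads to stumble on it:

        zpool clear pool D        # as in the question: clear the error counters, bring D back online
        zpool status -v pool      # confirm D shows ONLINE again
        zpool scrub pool          # read and checksum every block, so stale copies on D are found and rewritten
        zpool status -v pool      # watch scrub progress; CKSUM errors here indicate repaired stale blocks

    A scrub touches all allocated data, so it closes the gap left by "reads are not guaranteed to hit all mutations".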

    Read the article

  • Which directories must I back up before reinstalling Windows 7?

    - by gsc-frank
    I'm reinstalling a Windows 7 PC and want to back up all the system, application and user data, to decide later what is useful. Which directories must I back up? I will format the PC, and all important data must be saved. I have doubts particularly about C:\Users\USER_NAME\AppData, and about how to save it using an Ubuntu live CD in case I don't have access to the Windows 7 PC. AppData has a lot of symlinks inside!
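
    For illustration, a minimal sketch of copying a user profile from an Ubuntu live CD to an external drive. The partition names, mount points and user name are placeholders; rsync is used so that links inside AppData are copied as links rather than followed:

        sudo mkdir -p /mnt/win /mnt/backup
        sudo mount -o ro /dev/sda2 /mnt/win        # the Windows partition, mounted read-only
        sudo mount /dev/sdb1 /mnt/backup           # the external backup drive
        sudo rsync -avh "/mnt/win/Users/USER_NAME/" /mnt/backup/USER_NAME/

    The same pattern works for C:\ProgramData or any other directory you decide to keep.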

    Read the article

  • Are there any USB flash drives or SD cards which use RAID or redundant storage for additional reliability?

    - by Luke Dennis
    I'm looking to get a fault-tolerant USB flash drive, which saves data to multiple independent locations, whether using RAID or some other means to back up data. Has a product like this ever been created, or are my only options to hack something together? (By the way: I'm aware that RAID doesn't prevent data corruption from software or the file system. I'm just looking for something that can handle one of the memory sticks going dead.)
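
    Nothing off the shelf comes to mind, but as a hedged sketch of the "hack something together" route on Linux, two ordinary USB sticks can be mirrored with software RAID (device names are placeholders):

        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        sudo mkfs.vfat /dev/md0                      # or any filesystem you prefer
        sudo mount /dev/md0 /mnt/usbmirror
        # If one stick dies, the data is still readable from the survivor:
        sudo mdadm --assemble --run /dev/md0 /dev/sdb

    The obvious trade-off is that the mirror is only readable on machines that understand mdadm metadata.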

    Read the article

  • Is there a piece of software which lets you turn "anything" into a dynamic reorderable list?

    - by Robin Green
    I could write this myself, but I want to know if it already exists. Basically, it must fulfill these criteria: To reorder items, the user must never have to manually renumber them. That would be annoying, and it doesn't scale. Can read from a range of data sources (e.g. a database, a directory on the file system, a text file, another list) When the original data source changes, the list must automatically change with it (possibly with confirmation, e.g. if a list item would be deleted) Ability to persist the list ordering in some fashion Graphical display of list items (so that they can include e.g. images) Optional extras: Ability to modify data and write back to the data source (other than the ordering information)

    Read the article

  • How do you know where macports installs python packages to?

    - by xmaslist
    I am running macports to install scipy and such on OS X leopard with python 2.7. The install runs successfully, but running python and trying to import the packages I've installed, they're not found. What I'm running is: sudo python_select python27 sudo port install py27-wxpython py27-numpy py27-matplotlib sudo port install py27-scipy py27-ipython Opening up python in interactive mode (it is the correct version of python), I type 'import scipy' and get a module not found error. What gives? How can I find out where it is installing the packages to instead?
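
    A hedged sketch of how one might track this down with standard MacPorts and Python commands (newer MacPorts releases replace python_select with port select):

        port select --list python                  # which Python interpreters MacPorts knows about
        sudo port select --set python python27     # make /opt/local/bin/python point at the MacPorts 2.7
        port contents py27-scipy | head            # list the files the port actually installed
        python -c "import sys; print sys.path"     # confirm /opt/local/Library/... is on the module path

    If /opt/local/bin is not ahead of /usr/bin in PATH, the Apple-supplied python will be picked up instead and will not see the MacPorts site-packages.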

    Read the article

  • Partition falsely recognized as RAW

    - by Paul Hiemstra
    On my 2 TB data disk I have two primary partitions: one of 1.6 TB for data storage in Linux (ext3) and one of 300 GB for some additional data storage for Windows. I run a dual-boot Windows 7/Ubuntu 12.04 install. The issue I have is that if I start my computer into Windows 7, both partitions on my 2TB data drive are not recognized; instead, Windows 7 sees one 1TB partition of type RAW. However, if I reboot into Linux and then back into Windows 7, the partitions are correctly recognized. Two screenshots (taken before rebooting into Linux and after the reboot, not shown here) illustrate the situation. I have two questions: What could cause this behavior? How can I solve this issue?

    Read the article

  • people_dl_import shows millions of records

    - by amit lohogaonkar
    We have a situation in production on a SharePoint 2007-based intranet platform: it shows thousands of records under the people_dl_import category, with the format spsimport://?$$dl$$/domain1/domain2/domain3/. The import was also not stopping; it added millions of records to the database and the disk was on the verge of filling up. On other servers, like dev, we have much less data in this category and the format is also like spsimport://domainname?$$dl$$?..., which is good and has only 6000 rows, while production has 2 million rows crawled under the people_dl_import category. I need to know the cause of this garbage data and how to fix it. I tried resetting the content source, and I will do a full import this weekend to see whether the garbage data gets cleared. Any idea what causes this issue?

    Read the article

  • Is there a time machine equivalent for windows that can back up network files?

    - by Jim Thio
    This question is similar to "Does an equivalent of Time Machine exist for Windows?", with one difference: the files I want to back up are on a network drive. The computer on that network drive is running Windows XP. I want to back up the data on Windows 7. How would I do so? I'd like something similar to Mac OS X's Time Machine: a copy of the data every hour, day and week, with the data thinned out and deleted automatically as time goes by. For example, the data for the last day is kept as hourly snapshots, for the last week as daily snapshots, and for the last month as weekly snapshots. How can I achieve this?
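
    For illustration, a rough sketch of the hard-link snapshot idea behind Time Machine, assuming rsync is available on the Windows 7 machine (for example via Cygwin or cwRsync) and the XP share is mounted as drive Z:. Paths and names are placeholders:

        #!/bin/sh
        # Hourly snapshot: unchanged files are hard-linked to the previous snapshot,
        # so each run only stores what actually changed.
        SRC=/cygdrive/z/                 # the mounted network share
        DST=/cygdrive/d/backups
        NEW=$DST/$(date +%Y-%m-%d_%H%M)
        rsync -a --delete --link-dest="$DST/latest" "$SRC" "$NEW"
        rm -f "$DST/latest" && ln -s "$NEW" "$DST/latest"

    Thinning out old snapshots is then just deleting the unwanted snapshot directories; the hard links keep shared file contents alive as long as at least one snapshot references them.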

    Read the article

  • VMware Workstation 7&8&9 does not generate /etc/vmware/network upon installation

    - by dash17291
    When I install VMware Workstation on Arch Linux, virtual Ethernet is not working.

        $ sudo tail /var/log/vnetlib
        Aug 28 22:20:33 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
        Aug 28 22:20:33 VNLNetCfgLoad - Import file does not exist
        Aug 28 22:20:33 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
        Aug 28 22:20:33 VNLNetCfgUnload - Requested cache is not loaded
        Database file is not present. Failed to initialize
        Aug 28 22:20:41 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
        Aug 28 22:20:41 VNLNetCfgLoad - Import file does not exist
        Aug 28 22:20:41 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
        Aug 28 22:20:41 VNLNetCfgUnload - Requested cache is not loaded
        Required modules compiled.

    Previously I copied that file (or directory, I don't remember) from a working installation, but now I need a real solution. It seems strange to me; it may also be a hardware issue, because the same thing happens with Ubuntu on the same computer.

    Read the article

  • Linux mdadm software RAID 6 - does it support bit corruption recovery?

    - by user101203
    Wikipedia says "RAID 2 is the only standard RAID level, other than some implementations of RAID 6, which can automatically recover accurate data from single-bit corruption in data." Does anyone know if the RAID 6 mdadm implementation in Linux is one such implementation that can automatically detect and recover from single-bit data corruption. This pertains to CentOS / Red Hat 6 if those are different from other versions. I tried searching online but didn't have much luck. With SATA error rates being 1 in 1E14 bits, and a 2TB SATA disk containing 1.6E13 bits, this is especially relevant to preventing data corruption. Thanks!
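
    As a side note, a hedged sketch of how to ask md to read and compare all data and parity on an array (this detects mismatches; whether a given single-bit error is then silently corrected is exactly the question above, and depends on the md/kernel version):

        # Kick off a background consistency check on /dev/md0
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat                         # progress of the check
        cat /sys/block/md0/md/mismatch_cnt       # number of inconsistent stripes found
        # 'repair' rewrites parity/mirrors to make the stripes consistent again
        echo repair > /sys/block/md0/md/sync_action

    On Red Hat-style systems a cron job typically runs this kind of check periodically.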

    Read the article

  • How do I set default group ownership for files in a directory?

    - by tnichols
    I am running a cakephp webapp on Linode LAMP. I am finding that my temp files are created with root:root ownership. But the webapp is running with Apache's permissions (www-data). This causes warnings any time there is a new file created because it is not writable for user www-data. How do I change the default ownership to www-data on any new files created in the temp folder? Thanks for your help!
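
    A common approach is the setgid bit plus a default ACL on the temp directory; a minimal sketch, assuming the directory is /var/www/app/tmp (a placeholder path) and the ACL tools are installed:

        sudo chgrp -R www-data /var/www/app/tmp
        sudo chmod -R g+rwX /var/www/app/tmp
        sudo find /var/www/app/tmp -type d -exec chmod g+s {} \;   # new files inherit the www-data group
        # Optional: default ACL so new files are group-writable regardless of the creating process's umask
        sudo setfacl -R -d -m g:www-data:rwX /var/www/app/tmp

    The setgid bit fixes the group of newly created files; the ACL fixes their permission bits.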

    Read the article

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard link rsync archive daily backup system saved to a loop file. This is great, because each backup only takes up as much space as what has changed each day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental backup data destruction, as how it stands now users can actually change, destroy or add to backed up data they originally owned. I've been looking for a way to mount this filesystem similar to an ro mount option, but something that would still allow rw access to root, but I've had absolutely no luck. In other words, I want users to be able to view and copy their backed up data without actually being able to change it, and have that data maintain the original permissions. I've got no real preferences as far as filesystem, as long as it's a standard unix filesystem that can preserve permissions, support hard links and deny write access to users without actually stripping the w permission from everything.
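
    One arrangement that fits these constraints is to expose the backups through a second, read-only mount while root keeps the original read-write mount. A minimal sketch, assuming the backup loop file is already mounted at /backups (paths are placeholders):

        sudo mkdir -p /export/backups
        sudo mount --bind /backups /export/backups
        sudo mount -o remount,ro,bind /export/backups   # users browse here: read-only for everyone
        # Root still has full read-write access through the original /backups mount, and
        # ownership, permissions and hard links are preserved because it is the same filesystem.

    The w bits stay intact on the files themselves; the read-only view simply refuses all writes at the mount level.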

    Read the article

  • Port forwarding: combining several ports

    - by kiraitachi
    Hi, I have a Raspberry Pi at A.A.A.B on my local network, and I have set up a DMZ on my router so that any incoming traffic that reaches the router gets redirected to my Raspberry Pi, which I can connect to via a No-IP address. The problem is that I want to set up port forwarding, since I have several services running on my Pi: SSH, a torrent web GUI, a web album, etc. I had already done this a long time ago, but I have forgotten the syntax a bit and can't get it set up. The router help says: "The Application allows you to do port forwarding, but only have the ports open when data is flowing out of the trigger ports. When a program sends data out on outgoing ports called trigger ports, the device then allows incoming data on the open ports specified in your port triggering configuration. 1. Trigger Port Start: specify the start port on the device that would trigger the device to open ports for incoming data. 2. Trigger Port End: specify the end port on the device that would trigger the device to open ports for incoming data. You can enter a port number the same as the trigger port start, or enter a larger port number to specify a port range. 3. Trigger Traffic Protocol Type: select the trigger traffic type. Open Port: specify all the ports to be opened. Its content could be: a single port only; a port range only (start and end port numbers should be separated by "-"); or a combination of several single ports and several port ranges, each separated by ",". Open Traffic Protocol Type: select the open traffic type." These are the fields: http://es.tinypic.com/view.php?pic=n5lv1k&s=8 I think the syntax is 1-7999,8001-9090,9092-65535, but each time I try to add it I get an error. Any ideas?

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields in it. One of them is a domain_id field; it is the best field to identify the rows by, as there are only approximately 6300 distinct values. The only other candidate field has over 2 million distinct values, and that is just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe that if I can do a dump for each domain_id value, then I will get most of the usable data out of it. This is what I am trying to use:

        mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql

    Using this I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, whereas when I look at the db there are many, many rows. It is as though the operator just finds one and then gives up. I have tried various operators. Using the < or > operators I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 domain_ids to go through, I can't easily narrow down which rows are being affected in the export. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field value. Also note, the only way I got this DB accessible at all is with innodb_force_recovery = 3. So I need to get this right, because after this is done, I have to drop the db in order to make MySQL functional again. Looking forward to any helpful answers.
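
    If a single --where keeps stopping early, one workaround is to dump each domain_id into its own file, so a corrupt row only truncates that one small dump. A hedged sketch, with "database" and "table" as placeholders and credentials omitted:

        #!/bin/sh
        # Dump every domain_id separately; a failure affects only that one file.
        for id in $(mysql -u root -N -e "SELECT DISTINCT domain_id FROM database.table"); do
            mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
                database table --where="domain_id=$id" > "domain_$id.sql" \
                || echo "domain_id $id failed" >> failed_ids.txt
        done

    The failed_ids.txt list then tells you exactly which domains contain the damaged rows.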

    Read the article

  • Adding a second Samba server to a Windows domain

    - by Eric
    Hi, I'm trying to add a second (stand-alone) Samba server to our Windows domain, which is managed by a Samba server, but we've had some problems: we can see the server and the shares, but cannot access the shares. We decided to start with a minimal configuration:

        [global]
        netbios name = GINGER
        wins server = 192.168.0.2
        workgroup = DOMAIN1
        os level = 20
        security = share
        passdb backend = tdbsam
        preferred master = no
        domain master = no

        [data]
        comment = Data
        path = /home/data
        guest only = Yes

    Trying to access the share still gives a permissions error. Thanks,
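
    For troubleshooting, a small hedged sketch of checks that can be run on the new server itself, using the share and host names from the configuration above:

        testparm -s                           # validate smb.conf and show the effective settings
        smbclient -L GINGER -N                # list the shares anonymously from the server itself
        smbclient //GINGER/data -N -c 'ls'    # try to browse the guest-only share
        ls -ld /home/data                     # the Unix permissions must allow the guest account to read

    Because the share is "guest only", the Unix permissions on /home/data for the guest account (usually nobody) are the most common cause of a permissions error.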

    Read the article

  • Is there any way to do "mail server parking"?

    - by percyboy
    I am managing a mail server which will be temporarily closed for three or four days due to data center maintenance. I want to find a solution to completely or partly avoid losing mail during this unavailable period. Because the data volume is huge, it is very hard to migrate it to another data center. One approach I have thought of is to set up a temporary mail server in another data center which, when new mail is received, automatically sends a reply telling the sender: "We are temporarily closed for three or four days. Please send the mail later or contact us by other means." Is this approach possible with an existing mail server, or is something better available? (A free solution is preferred, since this is only temporary.)
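
    For illustration only (Postfix is an assumption here, not something the question states), a temporary relay could answer every delivery attempt with a 4xx temporary failure. Sending servers then keep the message queued and retry, typically for several days, so mail is delayed rather than lost, and many senders get a "delivery delayed" notice they can read. A hedged sketch, with example.org as a placeholder domain:

        # main.cf on the temporary relay
        smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/maintenance,
            permit_mynetworks, reject_unauth_destination

        # /etc/postfix/maintenance  (then run: postmap /etc/postfix/maintenance && postfix reload)
        example.org   450 4.3.2 Mail system temporarily down for maintenance, please retry in a few days

    Even with no temporary server at all, standards-compliant senders already queue and retry mail to an unreachable MX for several days, so the main gain here is the explicit explanation in the deferral text.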

    Read the article

  • How do I remotely run a Powershell workflow that uses a custom module?

    - by drawsmcgraw
    I have a custom Powershell module that I wrote for various tasks. Now I want to craft a workflow whose activities will use commands from the module. Here's my test workflow:

        workflow New-TestWorkflow {
            InlineScript {
                Import-Module custom.ps1
                New-CommandFromTheModule
            }
        }

    Then I run the workflow with:

        New-TestWorkflow -PSComputerName remoteComputer

    When I do this, the import fails because it can't find the module. I imagine this is because the workflow is executing on the remote machine, where my module does not exist. I can see myself running this across many machines, so I'd really rather not have to install this module and maintain it on all of the machines. Is there some way to have my module in a central place and use it in workflows?

    Read the article

  • Generate and use an OpenSSL certificate in Tomcat

    - by Safari
    I need to enable SSL on my Tomcat and Apache, so I need to generate a (self-signed) certificate using the OpenSSL tool and then, for Tomcat, import the certificate using keytool. I know it is necessary to convert the OpenSSL certificate into a Tomcat-compatible format: use OpenSSL to convert the certificate into a PKCS12 keystore, then import this keystore using keytool and export it as a Tomcat-compatible keystore. But I have not understood how I can convert my certificate (generated with OpenSSL) into the format Tomcat requires. Is it possible to explain all the steps to reach my goal? Thanks.
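
    A rough sketch of the usual OpenSSL/keytool steps (file names, the "tomcat" alias and the validity period are placeholders):

        # 1. Self-signed key and certificate with OpenSSL
        openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout server.key -out server.crt
        # 2. Bundle key + certificate into a PKCS12 keystore
        openssl pkcs12 -export -in server.crt -inkey server.key -name tomcat -out keystore.p12
        # 3. Either point the Tomcat connector at the PKCS12 file directly (keystoreType="PKCS12"),
        #    or convert it into a JKS keystore with keytool:
        keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 \
                -destkeystore keystore.jks -deststoretype JKS

    The connector in server.xml then references whichever keystore file you chose via keystoreFile and keystoreType.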

    Read the article

  • Manage Large E-Book Archive

    - by Cnkt
    I have a very large e-book archive (approx. 1TB) in various file formats, e.g. PDF, DJVU, MOBI and EPUB. I put them in different folders by subject, e.g. Engineering, Programming, etc., but after many years things are getting out of hand. The Programming folder alone is 220GB and the file names are cryptic. Some are well defined, like 236659889_Final_Report_of_2012_Climate_Change_Conference.pdf, but some are just ISBN numbers or just download.pdf. I need an application for organizing and searching my e-books. I have already tried Calibre, Mendeley and Debenu, but all of these apps try to import the files first, and I don't have a spare 1+ TB for the app's import folder. Is there any good Windows application for just indexing the filenames and contents of e-books without importing them?
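
    No specific Windows product to recommend here, but as a stop-gap sketch (assuming a Unix-style shell such as Cygwin and the pdftotext tool from Poppler/Xpdf), a searchable index can be built in place without copying anything:

        # Index of every e-book path (fast to grep later); the archive path is a placeholder
        find /cygdrive/e/ebooks -type f \
            \( -iname '*.pdf' -o -iname '*.djvu' -o -iname '*.mobi' -o -iname '*.epub' \) \
            > ebook_index.txt

        # Optional: extract the first page of text from each PDF, so titles hidden inside
        # cryptically named files become searchable too.
        while read -r f; do
            case "$f" in *.pdf|*.PDF) pdftotext -l 1 "$f" - >> ebook_text_index.txt ;; esac
        done < ebook_index.txt

        grep -i "climate change" ebook_index.txt ebook_text_index.txt

    The two index files stay tiny compared to the archive and can be rebuilt whenever files are added or renamed.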

    Read the article

  • Excel Single column into rows, VBA script insight

    - by Sanityvoid
    Okay, so this is much like the question linked below, but mine is a bit different: Paginate Rows into Columns in Excel. I have a lot of data in column A, and I want to take every 14 to 15 rows and make them a new row with multiple columns. I'm trying to get it into a format that SQL can take in, and I figured the best way was to get the values into rows and then make a CSV out of the data. So it would look like the below (wow, the format totally didn't stick when posting): column A column B C D etc 1 1 2 3 x 2 16 17 a b 3 x y z 15 16 17 a b c. I can clarify if needed, but I'm stumped on how to get the data out of a single column with so many rows in it. Thanks for the help!
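
    Since the end goal is just a CSV for SQL, a hedged alternative to a VBA macro (assuming the column has been saved out of Excel as a plain text file, one value per line) is a one-liner that wraps every 15 lines into one comma-separated row:

        # Every 15 input lines become one CSV row; change 15 to the real record size.
        awk '{ printf "%s%s", $0, (NR % 15 == 0 ? "\n" : ",") } END { if (NR % 15) print "" }' column.txt > rows.csv

    The END block just terminates a final partial row, so a file whose length is not an exact multiple of 15 still produces valid CSV.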

    Read the article

  • Network Performance issue

    - by qubemarker
    We have three Ubuntu 10.04 servers. One server is a storage server and the other two are configured as clients. The storage server has a good amount of capacity and is integrated with a Windows Active Directory server for authentication. I am uploading video files from both clients to the server; when I upload from any one client alone I get about a 26 MB/s transfer rate, but when I upload from both clients simultaneously I only get about 8 MB/s from each client. I have gigabit Ethernet cards in all of the servers and an L2 managed gigabit switch for connectivity. I don't know why the transfer rate drops so much with simultaneous reads and writes. I have tried all of the TCP stack related settings suggested here. Can anyone assist with getting better read/write performance out of this setup? Any help is appreciated.
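
    To narrow down whether the bottleneck is the network path or the storage server's disks, a quick hedged sketch with iperf (assuming it is installed on all three machines):

        # On the storage server
        iperf -s

        # On each client: first one at a time, then both at the same time
        iperf -c storage-server -t 30

    If both clients still see near-gigabit rates here while the simultaneous file copies drop to 8 MB/s each, the limit is more likely the server's disk subsystem or the file-sharing protocol than the switch or the NICs.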

    Read the article

  • Sharing RAM resources between 2 or more computers

    - by davee44
    I know there was a somewhat similar question before: "How to share CPU or RAM?" But let me just specify it a little more... When Microsoft Windows requires more RAM capacity than is available, it uses a swap file to temporarily store the data; this is effectively hard-drive-based RAM, and the technology has been in use for many years. Theoretically, it shouldn't be too hard to implement something similar that uses the RAM of other computer(s) in the network for temporary data storage. This just requires software running on the other computers that accepts data from the main computer, keeps it in RAM, and returns it on request; plus, the operating system of the main computer must be able to use computers in the network instead of (or in addition to) the swap file. I wonder, are there any implementations of this idea? This would allow users to build RAM clusters out of all of their home or office computers, which would boost the performance of a single computer for some development/gaming/video tasks, etc.
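
    On Linux, at least, something close to this can be assembled today: swap can be placed on a network block device, so another machine's disk or RAM backs the swap space. A hedged sketch, assuming the nbd tools are installed on both machines and the host names are placeholders:

        # On the machine donating memory/disk: export a 4 GB file (it could live on a tmpfs ramdisk)
        dd if=/dev/zero of=/tmp/swapfile bs=1M count=4096
        nbd-server 9000 /tmp/swapfile

        # On the machine that needs more memory:
        nbd-client donor-host 9000 /dev/nbd0
        mkswap /dev/nbd0
        swapon -p 5 /dev/nbd0      # network-backed swap, at lower priority than local swap

    This is generally considered fragile (swapping over the network under heavy memory pressure can stall badly), which is probably part of why it has never become a mainstream Windows feature.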

    Read the article
