Search Results

Search found 59041 results on 2362 pages for 'data replication'.


  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers; specifically, I'm trying to find the most elegant way to build a scalable cluster to host all my Python web services. For now, I'm using the following servers: 1 x PuppetMaster to deploy my servers. 2 x Apache reverse-proxy front-end servers. 1 x Apache HTTPd server which hosts the Python WSGI applications via mod_wsgi. 4 x clustered MongoDB servers. Everything is fine with the reverse proxies and the DB back end: I can easily add a new reverse proxy or a new DB node. My problem is the Python web server. I thought about just provisioning a new node with exactly the same configuration and rsync replication between the two nodes, but that isn't really workable in terms of deployment for my developers, etc. So if you have a solution as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)
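
    A minimal sketch of one way to make deployment painless across several WSGI nodes, assuming mod_wsgi runs in daemon mode (where touching the .wsgi script file triggers a graceful reload); the node names and paths below are placeholders, not from the post:

        #!/bin/bash
        # Push the application to every WSGI node, then reload each daemon.
        NODES="wsgi01 wsgi02"
        for node in $NODES; do
            rsync -az --delete ./app/ deploy@"$node":/srv/www/app/
            # In mod_wsgi daemon mode, touching the WSGI script file
            # restarts the daemon process gracefully on the next request.
            ssh deploy@"$node" "touch /srv/www/app/app.wsgi"
        done

    Adding a node then means applying the Puppet role and appending it to NODES, which keeps the workflow close to what a Tomcat farm deploy feels like.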

    Read the article

  • Dynamically changing one-node Cassandra cluster to two nodes

    - by Jason Axelson
    So I have an application that will be mostly dormant but will need to handle high bursts a few days out of the month. Since we are deploying on EC2, I would like to keep only one Cassandra server up most of the time, and then on burst days bring up a second server (with more RAM and CPU than the first) to help serve the load. What is the best way to do this? Should I take a different approach? Some notes about what I plan to do: bring the node up and repair it immediately; after the burst period is over, decommission the powerful node; use the always-on server as the seed node. My main question is how to get the nodes to share all the data: I want a replication factor of 2 (so both nodes have all the data), but that won't work while there is only one server. Should I bring up two extra servers instead of just one?
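
    A rough sketch of the burst-day sequence with Cassandra's standard tooling, with host and keyspace names as placeholders; the exact replication-change syntax depends on the Cassandra version:

        # After the powerful node has joined the ring:
        nodetool -h new-node repair        # stream it a full copy of the data
        # Raise the replication factor so both nodes hold everything, e.g. in cqlsh:
        #   ALTER KEYSPACE myks WITH replication =
        #     {'class': 'SimpleStrategy', 'replication_factor': 2};
        # When the burst ends, drop the RF back to 1 first, then:
        nodetool -h new-node decommission  # hand its ranges back and leave the ring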

    Read the article

  • Backup Solr home

    - by user226188
    I'm new to Solr. I've successfully installed Tomcat, the Solr 4.3.1 webapp, and two collections on a CentOS 6.4 machine. Now my server is in production and I need to make backups of Solr, so I would like to know the best way to do that. For the moment I stop Tomcat, tar my Solr home, and start Tomcat again, but I've read that this is not a good solution; moreover, it means stopping the whole Tomcat instance, which hosts webapps other than Solr. I've also heard there is a script named "backup" in the Solr home's bin folder, but my bin folder is empty :( I don't want to set up another slave server with replication; for me that's not a backup solution, because my backups are supposed to be sent to a Bacula backup server every night. Isn't there a built-in solution I can script around, like mysqldump for MySQL servers? Thanks for the help!
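
    One alternative to stopping Tomcat: Solr 4.x can snapshot an index online through the ReplicationHandler, assuming it is enabled in each collection's solrconfig.xml; the host, port, collection name, and paths here are placeholders:

        # Trigger an online backup of one collection, no Tomcat restart needed:
        curl 'http://localhost:8080/solr/collection1/replication?command=backup&location=/var/backups/solr&numberToKeep=2'
        # The snapshot directory it writes can then be shipped to Bacula at night.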

    Read the article

  • Configure redis to not have everything in memory?

    - by acidzombie24
    I like Redis because it lets me do operations on data structures. I wanted to see what would happen if I put more data into Redis than I have RAM. So I wrote a loop that inserted 30k bytes repeatedly and set maxmemory 100MB, figuring usage would stay at 100 MB. It kept growing, past 1 GB, then past 2 GB. Suddenly it crashed, because I was running the 32-bit version. Now... I don't understand what the point of maxmemory is. I am using the Windows version, so maybe it's ignored. Does Redis have to keep everything in memory? If I have a site (on Linux) with a 10 GB database and 512 MB on the machine, will Redis work? I don't need it to be amazingly fast; I just prefer modifying data in it to SQL (although I hope it is still faster than MySQL).
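
    For reference, this is roughly how the limit is meant to behave on builds that honor it (a sketch; the unofficial Windows port may well ignore it): maxmemory caps the dataset, and maxmemory-policy decides what happens at the cap.

        # Set a 100 MB cap (value given in bytes for compatibility with older versions):
        redis-cli config set maxmemory 104857600
        # Without an eviction policy, writes fail at the cap instead of evicting:
        redis-cli config set maxmemory-policy allkeys-lru

    Note that mainline Redis keeps the entire dataset in RAM (the old virtual-memory feature was deprecated), so a 10 GB dataset will not fit on a 512 MB machine.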

    Read the article

  • How to setup a simple Ubuntu Server Tomcat cluster on VirtualBox for testing?

    - by Alex Pakka
    I am looking for step-by-step instructions for setting up at least two (and later more) small Ubuntu 12.10 Server VMs on Oracle VirtualBox under Windows 7 64-bit. The test setup would be: an Apache HTTP server on the Windows host acting as a load balancer, so that going to http://localhost:8080 balances between two nodes and proves session replication; and two lean, small-footprint Ubuntu Server guest nodes with Java 7 and Tomcat 7. The intention is to help everyone doing high-availability / load-balancing development and testing to create a reasonable environment on a local workstation or mainstream notebook in as little time as possible.
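
    A minimal sketch of the host side, assuming Apache on the Windows host with mod_proxy and mod_proxy_balancer loaded; the guest IPs (VirtualBox host-only defaults) are placeholders, and each Tomcat's server.xml would set the matching jvmRoute (node1/node2) for sticky sessions:

        # Balancer fragment appended to the Apache config (written from a shell for brevity):
        cat >> conf/extra/httpd-cluster.conf <<'EOF'
        <Proxy balancer://tccluster>
            BalancerMember http://192.168.56.101:8080 route=node1
            BalancerMember http://192.168.56.102:8080 route=node2
        </Proxy>
        ProxyPass / balancer://tccluster/ stickysession=JSESSIONID
        EOF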

    Read the article

  • EXT4 external hard drive for use with multiple systems

    - by EXTdumb
    I recently bought an external hard drive to store some data on. I use Linux, but I am not a power user. If I format the drive to EXT4, is it possible for the permissions to ever get screwed up so that I lose access to my data? I will be plugging the drive into several different Linux-based computers at work, and I frequently hop distros on my main home machine. I need to make sure I don't lose any data because I overlooked something; I am not familiar with ext3 or ext4. So far I have: formatted the drive to EXT4; run gksudo thunar, changed the owner to my user account, and set all permissions to read/write; written all the files I need to the drive. I really appreciate any help.
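
    The usual way access "screws up" is that ext4 stores numeric UIDs, and your user may get a different UID on each machine or distro. One hedged workaround is to open up a data directory so any UID can read and write it (the mount point is a placeholder):

        sudo mkdir -p /media/ext4drive/shared
        # a+rwX grants everyone read/write, with execute only on directories:
        sudo chmod -R a+rwX /media/ext4drive/shared

    The alternative is to keep your UID identical (commonly 1000 for the first user) on every machine that touches the drive.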

    Read the article

  • Raid-3 like software backup tool

    - by Chronial
    I have a lot of data (about 7 TB) stored across multiple hard drives of varying sizes. I would like to have a backup of that data to be safe against drive failure. A RAID is not a good option for me, as I want to keep my costs low and be able to easily extend the storage capacity of my setup by buying an additional HD. I remember seeing a piece of software that generates parity data over all drives and stores it on an extra drive. That solution protects the setup from hard-drive failure and works with varying drive sizes (as long as the parity drive is the biggest one). But I can't seem to find that software again. Does anybody know what I'm talking about, or have any other solution for my situation?
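
    This description matches snapshot-parity tools such as SnapRAID (named here as a guess at the software being remembered). A minimal sketch of its setup, with all paths assumed:

        # /etc/snapraid.conf (illustrative):
        #   parity  /mnt/parity/snapraid.parity
        #   content /mnt/disk1/snapraid.content
        #   disk d1 /mnt/disk1/
        #   disk d2 /mnt/disk2/
        snapraid sync   # recompute parity after the data changes
        snapraid fix    # reconstruct files from parity after a disk failure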

    Read the article

  • check_postgres_checkpoint plugin error

    - by Iliyas
    I am using the check_postgres.pl plugin for Nagios, trying to monitor how long it has been since the last checkpoint with the check_postgres_checkpoint option. When I run the command from the CLI as root I get output, but I can't get output in the Nagios web interface. The error it shows is: ERROR: pg_controldata could not read the given data directory: "/opt/PostgreSQL/9.1/data". It is trying to access the pg_control file in the 'global' directory beneath the data directory, which is readable only by the postgres user. Can anyone suggest how this can be resolved? Thanks.
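
    The usual cause is that under Nagios/NRPE the plugin runs as the nagios user, which cannot read pg_control, while the CLI test ran as root. One hedged fix is to let nagios invoke the check as postgres via sudo; the paths and thresholds below are assumptions:

        # /etc/sudoers.d/nagios, edited with visudo:
        #   nagios ALL=(postgres) NOPASSWD: /usr/local/bin/check_postgres.pl
        # NRPE command definition wrapping the plugin in sudo:
        #   command[check_pg_checkpoint]=sudo -u postgres /usr/local/bin/check_postgres.pl \
        #       --action=checkpoint --datadir=/opt/PostgreSQL/9.1/data \
        #       --warning='5 minutes' --critical='10 minutes'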

    Read the article

  • people_dl_import shows millions of records

    - by amit lohogaonkar
    We have a situation in production on our SharePoint 2007-based intranet platform: the crawl shows thousands of records under the people_dl_import category with the format spsimport://?$$dl$$/domain1/domain2/domain3/. The import would not stop, added millions of records to the database, and the disk was on the verge of filling up. On other servers, like dev, we have very little data in this category, and the format is like spsimport://domainname?$$dl$$?..., which is fine and has only 6,000 rows; in production there are 2 million rows crawled under the people_dl_import category. I need to know the cause of this garbage data and how to fix it. I have tried resetting the content source, and I will do a full import this weekend to see if the garbage data gets cleared. Any idea what causes this issue?

    Read the article

  • Start TLS and 389 Directory

    - by Kyle Flavin
    I'm trying to configure Start TLS on 389 Directory Server, but I'm having all sorts of issues. I've been following this doc: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/managing-certs.html which specifies that I should create a certificate for both the directory server and the admin server. I've imported the CA cert on both servers. I've tried to use the same server certificate for both, but it will not allow me to do so. However, the admin and directory servers reside on the same host, so if I generate a new certificate it will need to use the same hostname, and I'm not sure that's valid... Has anyone out there set this up before? Any direction would be helpful. I have multi-master replication set up. From an external client, I'm attempting an ldapsearch -ZZ -x -h "myhost" -b "dc=example,dc=com" -D "cn=Directory Manager" -W "", and I'm getting a protocol error.
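
    Worth ruling out the client side first: with the OpenLDAP tools, -ZZ fails unless the client trusts the CA that signed the directory server's certificate. A hedged test from the external client, with the CA path as a placeholder:

        # Hand the client the CA cert explicitly and retry Start TLS:
        LDAPTLS_CACERT=/etc/openldap/cacerts/ca.pem \
            ldapsearch -ZZ -x -H ldap://myhost -b "dc=example,dc=com" \
            -D "cn=Directory Manager" -W
        # Add -d 1 for client-side TLS debugging if the protocol error persists.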

    Read the article

  • Mysql Encryption and Key managment

    - by microchasm
    I am developing a local intranet system in PHP/MySQL to manage our client data. It seems the best practice would be to encrypt the sensitive data on the MySQL server as it is entered. I am not clear, though, on the best way to do this while still keeping the data readily accessible. It seems like a tough question to answer: where are the keys stored? How to best protect the key? If the key is stored on each user's machine, how to protect it if the machine is exploited? If the key is compromised, how to change it? If the key is to be stored in the db, how to protect it there? How would users access it? If anyone could point me in the right direction or give some tips, I'd be very grateful. Thanks.
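
    One common pattern, sketched with MySQL's built-in AES functions: the key never lives in the database; it sits in a config file outside the web root, readable only by the application user, and is supplied per connection. All names here are hypothetical:

        # Key file: /etc/myapp/db.key, mode 0400, owned by the web-server user.
        # The encrypted column should be VARBINARY/BLOB, since the output is binary:
        mysql mydb -e "SET @k = 'contents-of-db.key';
            INSERT INTO clients (ssn) VALUES (AES_ENCRYPT('123-45-6789', @k));
            SELECT AES_DECRYPT(ssn, @k) FROM clients;"

    Key rotation then means decrypting with the old key and re-encrypting with the new one in a single maintenance pass.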

    Read the article

  • Does installing windows format the hard disk?

    - by Jason
    My Google search for "does installing windows format the hard disk" returns: No results found for "does installing windows format the hard disk". I was quite surprised. I'm hoping to get a quick answer here. Does an install format the hard disk and destroy all data, including non-OS data? Or do you specifically have to say "format" at some point, so you know you are losing everything? [I tried to go to SP3, but it doesn't work on my computer. My SP2 disk is fried. I only have an SP1 disk, with a separate SP2 package. I can't get to Safe Mode to uninstall SP3 ("Windows XP Setup cannot run under Safe Mode. Setup will restart now."). I don't want running the SP1 disk to destroy any non-OS data.] Thanks.

    Read the article

  • How to repair a Veritas tape that has been overwritten a bit?

    - by Ismo Utriainen
    I meant to restore some files, but I forgot that a Monday backup job was just waiting for a tape to be loaded. So Veritas 10d started writing over my tape, and that valuable data is now gone. The original data size was about 40 GB, and the accidentally started job wrote about 30 MB to the beginning of the tape. What are my chances of recovering some data from that tape? Update: inventory and catalog don't help; the media settings are overwrite, not append. It is a DLT drive.

    Read the article

  • Multiple iSCSI Targets or 1 that's shared?

    - by Joost Verdaasdonk
    On my network I have several types of files I want to store on a SAN, such as: SQL DBs and logs, Exchange data, and miscellaneous files. Now I'm wondering whether I should create one iSCSI target with a large volume and initiate it from one of the servers (and share it so other servers can use it too), or create separate targets so that each server uses its own storage. For the record, the storage could be separated, because the servers aren't using each other's data. One reason I was thinking of a single volume is ease of backup (but perhaps performance would be a problem?). What would be an advisable configuration for these types of data?
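
    If you go the separate-targets route, a sketch of what that looks like with the Linux tgt daemon (the IQNs and backing stores are placeholders); note that a single shared target mounted read-write by several servers at once would need a cluster-aware filesystem, which is another argument for one target per role:

        cat >> /etc/tgt/targets.conf <<'EOF'
        <target iqn.2012-01.com.example:san.sql>
            backing-store /dev/vg0/sql
        </target>
        <target iqn.2012-01.com.example:san.exchange>
            backing-store /dev/vg0/exchange
        </target>
        EOF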

    Read the article

  • NAS device for distributed team

    - by user5959
    We are a distributed team spread across 5 locations. We have a shared drive (1 TB of data) at our former location that we currently access via a Hamachi VPN; it is a network folder on a Windows Server at one of our locations. The current connection speed is terrible: the upload speed at the shared drive's current location is very slow. We are looking for a NAS device that we can host at another location with better upload speed, which all of us can access. I am looking for a NAS device with these features: minimal maintenance, as we do not have dedicated IT resources; access to the data from multiple locations; the ability to map it as a network drive on Windows computers; uploading data from random client computers without having to install software (right now we use LogMeIn Rescue's file manager); and the ability to handle slow or dropped connections when transferring files (maximum size 1.5 GB).

    Read the article

  • Creating a list based on a column

    - by MikkoP
    I need to create a dropdown list on sheet A based on the values in column A of sheet B. I selected column A on sheet B and named it Models. Then I clicked the cell on sheet A where I wanted the list and selected Data -> Data validation -> Data validation. On the Settings page I selected List in the Allow section, and checked Ignore blank and In-cell dropdown. In the Source section I entered =Models. This way I get all the right values, plus a lot of blank entries. How do I prevent the blank lines from appearing in the list?

    Read the article

  • Exchange 2010 and DAG - all roles on both servers?

    - by Keith
    We just recently migrated to an Exchange 2010 server. Currently all of the roles and mailboxes are installed on one server (we are a small company with fewer than 100 users). I want to use a DAG for replication; however, most DAG setups seem to require at least 3 or 4 servers in total. Is there any way to make this work with just two servers, both of which would hold all the roles and mailboxes? Maybe there is a better way to do this than a DAG? I'm open to suggestions. The goal is to have some sort of replicated server, so that if there is an issue with our primary Exchange server, another one can be brought up within an hour or so with all current information (not a backup). It doesn't necessarily have to be instantaneous.

    Read the article

  • RAID10 Without BBU, With UPS

    - by Richard
    My datacenter says each rack has primary and backup power. I assume this means there is a UPS for each server. Given that, do I have any need for a BBU with the following setup? Intel Cherry 520 SSD x 4 in RAID 10 on an LSI 9260 with writeback cache enabled. I have heard that without a BBU the data in the cache can be lost. Since my needs aren't mission-critical, I can afford to lose some data, but would the rest of the data on the drives be corrupted?
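
    On the corruption question: losing the writeback cache loses writes the OS believes are already on disk, so the filesystem (or a database on it) can be left inconsistent, not merely missing the last few seconds. If that risk is unacceptable without a BBU, the controller can be forced to write-through; a hedged MegaCLI invocation (flag spelling varies by tool version):

        # Show the current cache policy, then force write-through on all arrays:
        MegaCli -LDGetProp Cache -LAll -aAll
        MegaCli -LDSetProp WT -LAll -aAll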

    Read the article

  • MySql transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone can help... Facts: a 30 GB MySQL database on a remote server (about 20,000,000 rows); the data is updated once a week on the local network (MySQL); I need to transfer/replace the remote database with the locally updated one; the connection is about 2 MB/s up/down (real megabytes, not megabits). The point is that I can't have downtime on the remote MySQL server. Until now I have tried: Navicat data sync - OK, but takes about 3 days to finish; dbForge - OK, but needs 5 days to finish; a mysqldump transferred to the remote server and executed - about a day, but a lot of downtime; rsyncing the database folder under /mysql/lib/MY_DATABASE - 4 hours, but afterwards I always have to run a repair on the remote server, which takes about 2 hours, plus a lot of downtime; mysqldump piped from the command line directly to the remote server - still not satisfactory, many problems. I could list more things I've tried... MySQL replication - slow. Anyway, what is the best way to refresh the remote MySQL weekly with zero downtime and no huge server load? If you have any ideas, please share.
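
    One pattern that keeps remote downtime near zero, sketched with standard tools: load the week's data into a staging schema while the live one keeps serving, then swap the two with an atomic RENAME TABLE. Database and table names are placeholders:

        # Push the weekly dump into a staging database on the remote server:
        mysqldump --single-transaction mydb | gzip | \
            ssh remote "gunzip | mysql mydb_staging"
        # Swap live and staging in one atomic statement per table
        # (mydb_old must already exist to receive the outgoing copy):
        mysql -h remote -e "RENAME TABLE mydb.t1 TO mydb_old.t1,
                            mydb_staging.t1 TO mydb.t1;"

    The transfer still takes hours over a 2 MB/s link, but the live database is only interrupted for the instant of the rename.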

    Read the article

  • Regex working in RedHat is not giving any result in Ubuntu

    - by Supratik
    My goal is to match specific files in specific subdirectories. I have the following folder structure:

        `-- data
            |-- a
            |-- a.txt
            |-- b
            |-- b.txt
            |-- c
            |-- c.txt
            |-- d
            |-- d.txt
            |-- e
            |-- e.txt
            |-- org-1
            |   |-- a.org
            |   |-- b.org
            |   |-- org.txt
            |   |-- user-0
            |   |   |-- a.txt
            |   |   |-- b.txt

    I am trying to list only the files directly inside the data directory. I am able to get the correct result using the following command on RHEL:

        find ./testdir/ -iwholename "*/data/[!/].txt"
        a.txt
        b.txt
        c.txt
        d.txt
        e.txt

    If I run the same command on Ubuntu it does not work. Can anyone please tell me why it is not working on Ubuntu?
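
    Two things worth checking, sketched below: in an interactive bash session, ! inside double quotes can trigger history expansion and mangle the [!/] bracket class, so single-quote the pattern; and a portable alternative avoids -iwholename (a GNU extension) entirely:

        # Single quotes keep the shell away from the bracket expression:
        find ./testdir/ -iwholename '*/data/[!/].txt'
        # Portable equivalent: regular files directly inside data/ only
        find ./testdir/data -maxdepth 1 -type f -name '*.txt'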

    Read the article

  • MySQL - complete server migration (Ubuntu) [closed]

    - by Mr A
    Possible duplicates: How to copy and move mysql database; Dump all databases with SSH access. I'm setting up a new dev machine, and I have the old one sitting right next to me. I'd like to make an exact copy of all MySQL structures and data from the old machine to the new one. Nothing fancy needs to happen (it's a dev machine): no replication, and I don't care about downtime, etc. Is there a super simple way to do this? For example, I have SSH on the old server; can I just use Nautilus, connect to the server, transfer a folder over, replace another folder with it, and be done? It's the same version of MySQL on both sides, the same version of Ubuntu, and the same in most respects.
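
    For a like-for-like dev box, a raw datadir copy is about as simple as it gets, assuming the same MySQL version on both sides, both daemons stopped, and an SSH user that can read the remote datadir (paths are the Ubuntu defaults):

        # Stop MySQL on BOTH machines first, then on the new machine:
        sudo service mysql stop
        sudo rsync -a olddev:/var/lib/mysql/ /var/lib/mysql/
        sudo chown -R mysql:mysql /var/lib/mysql
        sudo service mysql start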

    Read the article

  • Lookups targeting merged cells - only returning value for first row

    - by Ian
    I have a master worksheet containing data that I wish to link to a summary sheet using a lookup. However, some of the cells whose data I want in the summary sheet are merged across two or more adjacent rows. To be clear, the 'primary' column A that my formula uses to identify the target row does not contain merged cells, but the column from which I wish to return a value does. I have tried VLOOKUP and INDEX+MATCH. The problem is that the data is only returned for the first row's key; the others return zero (as though the cell in the target column were blank, when actually it is merged). I have tried inelegant workarounds, e.g. using IF statements to try to find the top row of the merged cell, but these don't work well if the order of values in the summary sheet differs from that in the master sheet, as well as being messy. Can this be done?

    Read the article

  • Amazon EC2 as load balanced/failover solution

    - by sugiggs
    Hi all, I'm considering an idea but am not sure of its pros and cons. At the moment we host our website on a dedicated server. As a failover / load-balanced solution, I'm thinking of using Amazon EC2 + EBS. The files can be rsynced, and MySQL can be set up with master-master replication. When the load is high, I can bring the machine up, give it some time to sync, and load-balance traffic to it. Is this doable? Any links where I can read more about this?
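
    The moving parts, sketched with placeholder names: rsync for the web root, plus the pair of my.cnf settings that keep master-master auto-increment keys from colliding:

        # Cron job on the dedicated server, pushing files to the EC2 standby:
        rsync -az --delete /var/www/ deploy@ec2-standby:/var/www/
        # my.cnf on node 1 (node 2 uses server-id = 2, auto_increment_offset = 2):
        #   server-id                = 1
        #   log_bin                  = mysql-bin
        #   auto_increment_increment = 2
        #   auto_increment_offset    = 1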

    Read the article

  • Filtering downloading a file

    - by Ozgun Sunal
    People, I know there are several types of firewalls operating at different layers of the OSI model: ACLs (layer-3 firewalls that filter based on port numbers and IP addresses), SPI (which examines patterns in the traffic and decides whether the content is malicious), and application-layer firewalls, which are capable of understanding the data at that level. With that in mind, here's an example of what I need to do. Say we have a computer with access to the Internet. I want to be able to download a file or display a web page from one website but block access to another website, or block downloads from it. To do this, I can't just block the web browser on a third-party firewall, because that would shut down all access, and plain ACLs won't do it either. So, which kind of firewall makes it possible to filter specific traffic like this, and how?
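
    What's being described is an application-layer (proxy) firewall. As a concrete sketch, a Squid proxy can deny one site by domain name while everything else stays reachable; the domain is a placeholder, and the deny rule must sit above the existing http_access allow rules in squid.conf, so appending it to the end as shown is for brevity only:

        cat >> /etc/squid/squid.conf <<'EOF'
        acl blockedsites dstdomain .example.org
        http_access deny blockedsites
        EOF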

    Read the article

  • Trying to build a history of popular laptop models

    - by John
    A requirement on a software project is that it should run on typical business laptops up to X years old. Given a specific model number I can normally find out when it was sold, but I can't find data to do the reverse: for a given year, I want to see which model numbers were released or discontinued. We're talking big-name, popular models: Dell Latitude/Precision/Vostro, ThinkPads, HP, etc. The data for any one model is out there, but building a timeline is proving hard. Sites like Dell's are (unsurprisingly) geared around current products, and even Wikipedia isn't proving very reliable. You'd think this data must have been collated by manufacturers or enthusiasts, surely?

    Read the article
