Search Results

Search found 5793 results on 232 pages for 'ftp sync'.

Page 97 of 232

  • Transfer files using ssh

    - by zozo
    Good day to all. I am using SSH (WinSCP) to transfer some files from a server to my workstation. The problem is that with some files I get disconnected, and it is always the same files. I am the owner of the directory, so I guess file permissions are not the problem (I also set the permissions to 777). Is there a size limit or something like that? Thank you for your time. The protocol is SFTP, the server is a 32-bit machine, and the files are 100 MB at most. Added: it worked with FileZilla using FTP. That temporarily fixes the problem, but it is not really a solution, since next time I may not have root access to create an FTP account.
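
    One way to narrow this down is to take WinSCP out of the picture and repeat the transfer of one problem file from the command line; a minimal sketch, where host, user and path are placeholders:

        # plain SCP copy of a single problem file, with verbose output
        scp -v user@server.example.com:/path/to/problem-file.dat .

        # rsync over SSH can resume a partially transferred file after a disconnect
        rsync -avP user@server.example.com:/path/to/problem-file.dat .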

    Read the article

  • I need a few minutes of dedicated server time a week, but not for hosting, just to convert to Ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, to be done automatically in response to the detection of an uploaded MP3. Probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/FTP the MP3 files, convert them to Ogg, and FTP the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS which I could upload without needing root access, but I've given up asking for that one. Always open to suggestions, though!
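
    The fetch-convert-push loop described above is small enough to sketch as a shell script; the host, credentials and paths are placeholders, and it assumes a SoX build with MP3 and Ogg Vorbis support:

        #!/bin/sh
        # pull new MP3s, convert each to Ogg with SoX, push the results back over FTP
        WORK=/tmp/oggwork
        mkdir -p "$WORK"
        wget -q -P "$WORK" "ftp://user:pass@myhost.example/audio/*.mp3"
        for f in "$WORK"/*.mp3; do
            sox "$f" "${f%.mp3}.ogg"
            curl -T "${f%.mp3}.ogg" "ftp://user:pass@myhost.example/audio/"
        done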

    Read the article

  • Configuring vsftpd with nginx on ubuntu

    - by arby
    I have vsftpd installed on Ubuntu 12.04 LTS along with nginx, PHP, and SQL on an Amazon EC2 instance. The web server is good to go, but I'm having trouble connecting to the FTP server. I'm not quite sure how to set the privileges or what configuration options I might be missing. By default, the web root is at /usr/share/nginx/www and is owned by root:root. The web server runs as user www-data in the group www-data. I've opened port 21 and set the passive ports in the EC2 backend and the ufw firewall. In vsftpd.conf, I have:
      ...
      anonymous_enable=NO
      local_enable=YES
      local_umask=0027
      chroot_local_user=YES
      pasv_enable=YES
      pasv_max_port=12100
      pasv_min_port=12000
      port_enable=YES
      ...
    Now I'm unsure how to create an FTP user that, when I log in, gives me my web directory with write access. I've tried a few different approaches, but I keep running into errors (either no connection, no write access, or very slow timeouts).
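
    A minimal sketch of one way to set up such a user; the account name "deploy" and the group layout are assumptions, not anything from the question, and with chroot_local_user=YES some vsftpd builds refuse a chroot root that is writable by the user, so the permissions may need adjusting:

        # create a local user whose home directory is the nginx web root
        sudo useradd -d /usr/share/nginx/www -s /bin/sh -G www-data deploy
        sudo passwd deploy

        # give that user ownership of the web tree while keeping it readable by nginx
        sudo chown -R deploy:www-data /usr/share/nginx/www
        sudo chmod -R g+rX /usr/share/nginx/www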

    Read the article

  • Invalid operation dist-ugprade

    - by drdarwin
    I'm running Apache 2 on Debian on my VPS. Naturally I have a problem with the restricted GD library in my PHP package and I need to fix it (I need the imagerotate() function). Before installing the php-gd plugin it's necessary to update PHP 5.2 to PHP 5.3. My /etc/apt/sources.list is:
      #deb http://ftp.ru.debian.org/debian/ lenny main contrib non-free
      #deb http://security.debian.org lenny/updates main contrib non-free
      #deb http://ftp.ru.debian.org/debian lenny main
      #deb-src http://volatile.debian.org/debian-volatile lenny/volatile main contrib
      deb http://packages.dotdeb.org stable all
      deb-src http://packages.dotdeb.org stable all
    The problem comes when executing apt-get dist-ugprade:
      /$ apt-get update
      Hit http://packages.dotdeb.org stable Release.gpg
      Hit http://packages.dotdeb.org stable Release
      Ign http://packages.dotdeb.org stable/all Packages/DiffIndex
      Ign http://packages.dotdeb.org stable/all Sources/DiffIndex
      Hit http://packages.dotdeb.org stable/all Packages
      Hit http://packages.dotdeb.org stable/all Sources
      Reading package lists...
      /$ apt-get dist-ugprade
      E: Invalid operation dist-ugprade
    What can cause this problem? How long should I wait while it is "Reading package lists..."? Is there any simple guideline for the subsequent php-gd installation?
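
    For reference, apt-get reports "Invalid operation" when the subcommand name is not one it knows, and "dist-ugprade" above is a transposition of dist-upgrade. A sketch of the intended sequence (the php5-gd package name is an assumption for the dotdeb repository):

        apt-get update
        apt-get dist-upgrade      # note the spelling: dist-upgrade, not dist-ugprade
        apt-get install php5-gd   # GD extension that provides imagerotate()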

    Read the article

  • SSL certificate for FTPS, is it the same as for HTTPS?

    - by BlackTigerX
    This question is about "FTP over SSL". If I understand correctly, FTPS and HTTPS are just the standard FTP and HTTP protocols running on top of SSL; is this correct? The actual question is: is the certificate that you use for FTPS exactly the same as the one you can use for HTTPS, or are there any differences? To give you some context, I need to get a certificate for an FTPS server. I know I can generate one, but it needs to come from a certificate authority. I just need to make sure that I can use the same type of certificate that we use here for HTTPS; otherwise I need to know what type of certificate to get.
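
    In both cases the server presents an ordinary X.509 certificate over TLS/SSL, so a quick sanity check is to inspect one; a sketch with placeholder file and host names (port 990 assumes implicit FTPS):

        # look at the subject and validity of an existing HTTPS certificate
        openssl x509 -in server.crt -noout -subject -dates

        # connect to an FTPS server and dump the certificate it presents
        openssl s_client -connect ftp.example.com:990 -showcerts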

    Read the article

  • pure-ftpd: one readonly/non-deletable file in home directory

    - by Bram Schoenmakers
    Is there a way to have a file in a user's FTP home directory without the ability to modify or remove it from that directory over FTP? The user has write permission on his own home folder, and thus the ability to remove files. An exception should be made for a single file, which has the same filename and contents for each account. The solution I'm thinking of right now is to run a periodic script that checks for the presence of that file and, if it is missing, puts it back. But I wonder whether there's a better solution than this.
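
    The periodic check described above is easy to sketch as a cron job; the master copy location, home directory layout and file name are placeholders:

        #!/bin/sh
        # restore the shared file in every FTP home directory if it is missing or was changed
        MASTER=/etc/ftp-defaults/readme.txt
        for home in /home/ftpusers/*; do
            cmp -s "$MASTER" "$home/readme.txt" ||
                install -m 0644 -o root -g root "$MASTER" "$home/readme.txt"
        done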

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing the cache directory is filling up very quickly -- with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync is causing the p4 proxy cache to grab these users' sandboxes, which I'll never need. I would create a symbolic link from the sandbox directory to /dev/null, but then I wouldn't be caching my own sandbox, which I am interested in. Is there any way to tell the Perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it"?
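
    Short of that, one workaround is to let the proxy cache whatever it wants and periodically evict files that have not been read in a while; a sketch, with the cache path and the age threshold as assumptions:

        # drop cached revisions that nothing has accessed in the last two weeks
        find /var/cache/p4proxy -type f -atime +14 -delete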

    Read the article

  • Simple one-way synchronisation of user password list between servers

    - by Renaud Bompuis
    Using a RedHat-derivative distro (CentOS), I'd like to keep the list of regular users (UID 500 and above), along with the corresponding group and shadow entries, pushed to a backup server. The sync is one-way only, from the main server to the backup server. I don't really want to have to deal with LDAP or NIS. All I need is a simple script that can be run nightly to keep the backup server updated. The main server can SSH into the backup system. Any suggestions? Edit: thanks for the suggestions so far, but I think I didn't make myself clear enough. I'm only looking at synchronising normal users whose UID is 500 or above. System/service users (with UIDs below 500) may be different on both systems, so you can't just sync the whole files, I'm afraid.
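
    A sketch of the nightly push, selecting only the regular accounts; host names and paths are placeholders, and merging the copied entries into the backup server's own files is deliberately left out:

        #!/bin/sh
        # extract passwd/shadow/group entries for UIDs >= 500 and copy them to the backup host
        awk -F: '$3 >= 500 && $3 < 65534' /etc/passwd > /tmp/passwd.sync
        awk -F: '$3 >= 500 && $3 < 65534 {print $1}' /etc/passwd > /tmp/users.sync
        awk -F: 'NR==FNR {u[$1]; next} $1 in u' /tmp/users.sync /etc/shadow > /tmp/shadow.sync
        awk -F: '$3 >= 500' /etc/group > /tmp/group.sync   # GID cutoff is an assumption
        scp -q /tmp/passwd.sync /tmp/shadow.sync /tmp/group.sync backup.example.com:/root/user-sync/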

    Read the article

  • How to limit upload bandwidth per user in Linux?

    - by Gihan Lasita
    Can anyone provide the tc command to limit upload bandwidth per user in Debian Lenny? I found that to mark packets per user with iptables I can use the following command:
      iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner testuser -j MARK --set-mark 500
    but I have no idea how to use tc. Update: by running the following commands, I managed to limit testuser's upload bandwidth to 10Mbit:
      iptables -t mangle -N HTB_OUT
      iptables -t mangle -I POSTROUTING -j HTB_OUT
      iptables -t mangle -A HTB_OUT -j MARK --set-mark 30
      iptables -t mangle -A HTB_OUT -m owner --uid-owner testuser -j MARK --set-mark 10
      tc qdisc replace dev eth0 root handle 1: htb default 30
      tc class replace dev eth0 parent 1: classid 1:1 htb rate 10Mbit burst 5k
      tc class replace dev eth0 parent 1:1 classid 1:10 htb rate 10Mbit ceil 10Mbit
      tc qdisc replace dev eth0 parent 1:10 handle 10: sfq perturb 10
      tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10
    Now the problem is that I do not want to limit testuser's FTP bandwidth, but by running the above commands the FTP speed is also limited to 10Mbit. Regards
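
    One hedged approach is to stop marking this user's FTP traffic before the per-user MARK rule fires, so it stays in the default class rather than the 10Mbit one; the passive-mode data port range (12000-12100 here) is an assumption and would need to match the FTP server's configuration:

        # skip classification for FTP control and active-mode data connections
        iptables -t mangle -I HTB_OUT -p tcp -m owner --uid-owner testuser \
            -m multiport --sports 20,21 -j RETURN
        # skip classification for passive-mode data connections (assumed port range)
        iptables -t mangle -I HTB_OUT -p tcp -m owner --uid-owner testuser \
            --sport 12000:12100 -j RETURN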

    Read the article

  • File creation time on Windows vs Linux

    - by Sergei
    We have the following setup: mountserver - Debian Linux; fileserver1 - Windows 2008 R2 storage server; fileserver2 - Celerra NS20 exporting a CIFS share; workstation - Windows 7 with a mapped drive to the share on fileserver2. What we are doing: mounted the share from fileserver1 on mountserver, e.g. /shared/fileserver1; mounted the share from fileserver2 on mountserver, e.g. /shared/fileserver2; ran rsync on mountserver to sync data from fileserver1 to fileserver2, using atime as the parameter to sync data not older than X; after a while, tried to delete data older than Y on /shared/fileserver2. From what I see, the Linux stat command on mountserver returns the following when querying a file on /shared/fileserver2: [output not shown]. At the same time, when I open the properties of the same file using the mapped drive connected to fileserver2, I see the following: [screenshot not shown]. As you can see, the Created date of 12 August shown in Windows Explorer is nowhere to be seen in the stat output. Am I missing something here?
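
    For what it's worth, GNU stat does have a separate "birth" field, but on many filesystems and older kernels (CIFS mounts included) it is simply not filled in, which would explain why the Windows "Created" date never appears on the Linux side; a sketch with a placeholder path:

        # %w is the file birth time in newer GNU coreutils; "-" means the kernel/filesystem did not report one
        stat -c 'birth=%w  access=%x  modify=%y  change=%z' /shared/fileserver2/somefile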

    Read the article

  • Simple mdadm RAID 1 not activating spare

    - by Nick Liu
    I had created two 2TB HDD partitions (/dev/sdb1 and /dev/sdc1) in a RAID 1 array called /dev/md0 using mdadm on Ubuntu 12.04 LTS Precise Pangolin. The command sudo mdadm --detail /dev/md0 used to indicate both drives as active sync. Then, for testing, I failed /dev/sdb1, removed it, then added it again with the command sudo mdadm /dev/md0 --add /dev/sdb1. watch cat /proc/mdstat showed a progress bar of the array rebuilding, but I wouldn't spend hours watching it, so I assumed that the software knew what it was doing. After the progress bar was no longer showing, cat /proc/mdstat displays:
      md0 : active raid1 sdb1[2](S) sdc1[1]
            1953511288 blocks super 1.2 [2/1] [U_]
    And sudo mdadm --detail /dev/md0 shows:
      /dev/md0:
              Version : 1.2
        Creation Time : Sun May 27 11:26:05 2012
           Raid Level : raid1
           Array Size : 1953511288 (1863.01 GiB 2000.40 GB)
        Used Dev Size : 1953511288 (1863.01 GiB 2000.40 GB)
         Raid Devices : 2
        Total Devices : 2
          Persistence : Superblock is persistent
          Update Time : Mon May 28 11:16:49 2012
                State : clean, degraded
       Active Devices : 1
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 1
                 Name : Deltique:0  (local to host Deltique)
                 UUID : 49733c26:dd5f67b5:13741fb7:c568bd04
               Events : 32365

          Number   Major   Minor   RaidDevice State
             1       8       33        0      active sync   /dev/sdc1
             1       0        0        1      removed
             2       8       17        -      spare   /dev/sdb1
    I've been told that mdadm automatically replaces removed drives with spares, but /dev/sdb1 isn't being moved into the expected position, RaidDevice 1.
    UPDATE (30 May 2012): A badblocks destructive read-write test of the entire /dev/sdb yielded no errors as expected; both HDDs are new. As of the latest edit, I assembled the array with this command:
      sudo mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1
    The output was:
      mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding.
    Rebuilding looks like it's progressing normally:
      md0 : active raid1 sdc1[1] sdb1[2]
            1953511288 blocks super 1.2 [2/1] [U_]
            [>....................]  recovery =  0.6% (13261504/1953511288) finish=2299.7min speed=14060K/sec

      unused devices: <none>
    I'm now waiting on this rebuild, but I'm expecting /dev/sdb1 to become a spare just like the five or six times that I've tried rebuilding before.
    UPDATE (31 May 2012): Yeah, it's still a spare. Ugh!
    UPDATE (01 June 2012): I'm trying Adrian Kelly's suggested command:
      sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1
    Waiting on the rebuild now... My questions are: Why isn't the spare drive becoming active sync? How can I make the spare drive become active?
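
    Not a guaranteed fix, but one commonly suggested sequence when a member keeps rejoining as a spare is to take it out completely, wipe its RAID metadata, and add it back so it joins with a fresh slot; a sketch (only run this against the disk you intend to resync from scratch):

        sudo mdadm /dev/md0 --remove /dev/sdb1      # member must already be failed/removed
        sudo mdadm --zero-superblock /dev/sdb1      # wipe the stale member metadata
        sudo mdadm /dev/md0 --add /dev/sdb1         # re-add as a brand new member
        watch cat /proc/mdstat                      # the rebuild should end with [UU]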

    Read the article

  • MySQL Master-ColdMaster

    - by enedebe
    I'll explain my case: I'm on Amazon AWS and I want to be fault tolerant against an entire-region failure. My basic problem is keeping the DB in sync across 2 regions. My options: Master-Master (high lag); hand-made sync every 5 minutes; Master-ColdMaster?! (copy on the fly, but the master won't wait for the other region's commit). In my system we could afford losing a piece of data (we're not a bank), meaning the last inserts into the DB, but we could not afford more than 10 minutes of downtime. The database is small and the level of inserts is low, and I wouldn't want to affect normal usage by waiting for the other region's commit. Is the third solution possible? And most importantly, once the primary fails, how can we detect it and switch the roles from master-coldmaster to coldmaster-master? Is there any clean way to restore after the failure? Thanks!

    Read the article

  • How can I connect to a CIFS/SMB share on a non-default port?

    - by fsckin
    I'm trying to get a contractor connected to a CIFS share (port 445). He's not a big shop (so no go on using a VPN). His ISP blocks outgoing connections on port 445. I've been doing some rsync-to-FTP madness as a workaround to make the share available to him, but it's getting out of control -- we're syncing nearly 40GB a day to an external FTP site, and it would be much easier to just have him connect and grab only the stuff he needs. So... I can have the CIFS share open to the internet (filtered to allow access from his IP only) on port 446. How the heck can he connect to that? I looked through "net use" and didn't see anything about using another port.
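
    Whether the Windows SMB client can be pointed at a non-standard port is really the crux here; as a data point, a Linux CIFS client can, so if the contractor has (or can run) a Linux box the mount itself is straightforward. A sketch with placeholder names:

        # mount.cifs accepts an explicit TCP port via the port= option
        mount -t cifs //server.example.com/share /mnt/share -o port=446,username=contractor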

    Read the article

  • Cannot install grub to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS system, and my sda HDD was replaced several days ago. These are the commands I used to replace it:
      # go to superuser
      sudo bash
      # see RAID state
      mdadm -Q -D /dev/md0
      # State should be "clean, degraded"

      # remove broken disk from RAID
      mdadm /dev/md0 --fail /dev/sda1
      mdadm /dev/md0 --remove /dev/sda1
      # see partitions
      fdisk -l
      # shutdown computer
      shutdown now
      # physically replace old disk by new
      # start system again

      # see partitions
      fdisk -l
      # copy partitions from sdb to sda
      sfdisk -d /dev/sdb | sfdisk /dev/sda
      # recreate id for sda
      sfdisk --change-id /dev/sda 1 fd
      # add sda1 to RAID
      mdadm /dev/md0 --add /dev/sda1
      # see RAID state
      mdadm -Q -D /dev/md0
      # State should be "clean, degraded, recovering"
      # to see status you can use
      cat /proc/mdstat
    This is my mdadm output after the sync:
      /dev/md0:
              Version : 0.90
        Creation Time : Wed Feb 17 16:18:25 2010
           Raid Level : raid1
           Array Size : 470455360 (448.66 GiB 481.75 GB)
        Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
         Raid Devices : 2
        Total Devices : 2
      Preferred Minor : 0
          Persistence : Superblock is persistent
          Update Time : Thu Nov  1 15:19:31 2012
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
                 UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
               Events : 0.11049560

          Number   Major   Minor   RaidDevice State
             0       8        1        0      active sync   /dev/sda1
             1       8       17        1      active sync   /dev/sdb1
    After the rebuild completed, "fdisk -l" says that /dev/md0 does not have a valid partition table. This is my fdisk -l output:
      Disk /dev/sda: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00057d19

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *          63   940910984   470455461   fd  Linux raid autodetect
      /dev/sda2       940910985   976768064    17928540    5  Extended
      /dev/sda5       940911048   976768064    17928508+  82  Linux swap / Solaris

      Disk /dev/sdb: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000667ca

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
      /dev/sdb2       940910985   976768064    17928540    5  Extended
      /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

      Disk /dev/md0: 481.7 GB, 481746288640 bytes
      2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/md0 doesn't contain a valid partition table
    This is my grub install output:
      root@answe:~# grub-install /dev/sda
      /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet..
      /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install.
      root@answe:~# grub-install /dev/sdb
      Installation finished. No error reported.
    So: 1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0; 2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0". I cannot boot my system except from /dev/sdb1 and /dev/sda1, and then only in DEGRADED mode... Can anybody resolve this issue? It is giving me a big headache.

    Read the article

  • ESXi ftpput fails: syntax problem

    - by Datapimp23
    I'm trying to use ftpput to copy my virtual machine directories to our NAS, which doesn't support NFS, only FTP and Samba. So I'm in the ESXi console and enter the following command: ftpput ipaddress /vmfs/volumes/4a1157e1-be81171a-1b39-001d09080124/VMNAME /Backup. /Backup is a public share on the NAS; I can access it through any FTP client. After I run this, I get the following: ftpput: can't open 'Backup': No such file or directory. I'm kind of in the dark here. Any suggestions?
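
    BusyBox ftpput expects the host, then the remote file name, then the local file, and it only sends single files, not directories. A sketch that archives the VM directory first and then uploads the archive; the credentials, datastore path, and the assumption that this tar build supports gzip are all placeholders:

        # pack the VM directory into one file, then push it to the /Backup share
        tar -czf /vmfs/volumes/datastore1/VMNAME.tgz -C /vmfs/volumes/datastore1 VMNAME
        ftpput -u ftpuser -p ftppass NAS_ADDRESS /Backup/VMNAME.tgz /vmfs/volumes/datastore1/VMNAME.tgz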

    Read the article

  • Running rsync on network connect

    - by user40495
    I have one Mac which is always on and is my main computer. I also have a MacBook, and I'm trying to sync my iPhoto library between them. I can successfully use rsync to sync the files, and I'm using cron to run it once a day. In reality the MacBook isn't always on, so I'm looking for a way to run rsync whenever the two computers are connected to the same Wi-Fi network. So I'm guessing the best approach is to somehow run rsync when the AirPort interface connects. What's the best way?
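
    One low-tech variant is to keep the cron job but make it a no-op unless the other Mac is actually reachable; a sketch, with the hostname and library path as placeholders:

        #!/bin/sh
        # only sync when the iMac answers on the current network
        if ping -c 1 -t 2 imac.local >/dev/null 2>&1; then
            rsync -av "imac.local:Pictures/iPhoto\ Library/" "$HOME/Pictures/iPhoto Library/"
        fi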

    Read the article

  • External POP email relay

    - by Pixman
    I want to offer my customer this possibility: fetch her POP3 email from the old external POP3 server and forward the new messages to the new external POP3 server's address. I have found lots of tools for syncing IMAP accounts, or syncing POP to IMAP, but I just want to fetch POP mail and send it on to another email address. I'm looking for an answer for Linux (if I can build a simple daemon to do it, that's fine). Thanks a lot for your help. Edit, for more detail: to simplify my question, in my use case I just want to connect as a client via the POP protocol (like a mail app), check for new emails, and forward them to another email address. I'm looking for an app or code to do this on Linux. In this situation I have no access to the mailbox dirs or the server configuration (in that case I would already have the answer, by creating a qmail hook). Maybe this isn't the right site and my question should be posted on the Stack Overflow part?
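
    This is pretty much what fetchmail was built for: poll a POP3 mailbox as a client and hand each new message to a delivery command. A sketch of a ~/.fetchmailrc, with all hosts, credentials and addresses as placeholders:

        # poll the old mailbox and forward every new message to the new address
        poll pop3.oldhost.example protocol pop3
          username "customer" password "secret"
          keep
          mda "/usr/sbin/sendmail -i newaddress@newhost.example"

    It can then be run from cron, or as a daemon with fetchmail -d 300.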

    Read the article

  • How do I protect large file downloads through PHP and/or Apache?

    - by Eric
    We have some large files (1-8GB) that are not publicly accessible. Currently we're serving them up through a PHP script that buffers the files in 1MB chunks and writes them to the output. It's incredibly CPU-intensive and slows the server down when only a few downloads are active. We want to move the file-transfer work to Apache or a more efficient method. We are using cookie authentication, so FTP downloads are out unless there's some way to authenticate FTP sessions through the existing PHP session cookie. Ideally we'd like something where we can use PHP to hide the link to the file while passing the file-transfer work off to Apache, which is no doubt far more efficient at HTTP file transfers than PHP. We want to be able to resume downloads as well. Any help is appreciated.

    Read the article

  • WSUS is not using the Akamai CDN as its synchronisation source

    - by Geekman
    I've just installed WSUS on our network, and I'm currently doing the initial sync. I've found that WSUS does not seem to be talking to an Akamai cache, but rather to MS directly. This is contrary to what I've always thought regarding Windows Update traffic. Tcpdump of our WSUS server doing the initial sync:
      18:42:31.279757 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4379374:4380834, ack 289611, win 256, length 1460
      18:42:31.279759 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4380834:4382294, ack 289611, win 256, length 1460
      18:42:31.279762 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4382294:4383754, ack 289611, win 256, length 1460
      18:42:31.279764 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [P.], seq 4383754:4384144, ack 289611, win 256, length 390
      18:42:31.279793 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4369154, win 23884, length 0
      18:42:31.279888 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4377914, win 23884, length 0
      18:42:31.280015 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4384144, win 23884, length 0
    As you can see, it's speaking with 65.55.194.221. For me to speak to this IP, I have to go over international transit links, which is of course not ideal. And yet, if I ping download.windowsupdate.com it resolves to a local (national) Akamai node just fine:
      root@some-node:~# ping download.windowsupdate.com
      PING a26.ms.akamai.net (210.9.88.48) 56(84) bytes of data.
      64 bytes from a210-9-88-48.deploy.akamaitechnologies.com (210.9.88.48): icmp_req=1 ttl=59 time=1.02 ms
      64 bytes from a210-9-88-48.deploy.akamaitechnologies.com (210.9.88.48): icmp_req=2 ttl=59 time=1.10 ms
    Why is this? And how can I change it (if possible)? I know that I can manually specify a WSUS source to sync with instead of picking the default Microsoft Update like I currently have... but it seems like I shouldn't have to do this. NOTE: I haven't confirmed whether a WUA speaks with Akamai; I'm just looking at WSUS, as all WUAs will use our internal WSUS from now on. We'll be looking to join an IX shortly in the hope of peering with an Akamai cache and getting very fast access to Windows Updates. Before I let this drive my motivation for an IX at all, I want to first confirm it's actually possible for WSUS to speak with an Akamai cache. I know this is somewhat networking related, but I feel like it has more to do with WSUS than anything, so someone who knows WSUS better than me will likely be able to figure this out.

    Read the article

  • Apt-Get "sources.list needs at least 1 source-URI" error when building dependecies for Xen

    - by Entity_Razer
    What I'm trying to do is install Xen in a test environment. I am trying to run the apt-get build-dep xen-3.3 command, but it keeps throwing an error which, literally translated from Dutch (I installed the Debian OS in Dutch), says: E: your sourcelist (/etc/apt/sources.list) has to contain at least 1 source-URI. I've googled it but I can't seem to find a definitive, solid answer on how to fix this. By default a source-URI (read the man page of apt-get) needs to be something along the lines of deb ftp://ftp.debian.org/debian stable contrib. Now I've got 2 HTTP sources (the default Debian ones) up and running so far and they've been working flawlessly for the better part of a few days. Only now it's starting to act up. Anyone able to help me out? Much obliged!
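
    For context, apt-get build-dep reads deb-src lines rather than deb lines, and complains about missing source URIs when none are present. A sketch of the entries to add (the release name and mirror choice are assumptions):

        # /etc/apt/sources.list -- deb lines fetch binaries, deb-src lines feed build-dep
        deb     http://ftp.debian.org/debian stable main contrib non-free
        deb-src http://ftp.debian.org/debian stable main contrib non-free

        # then refresh and retry
        apt-get update
        apt-get build-dep xen-3.3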

    Read the article

  • How to setup an iTunes library to use between two Macs?

    - by stead1984
    As you can tell, this is nowhere near work related. I have an iMac G5 where my iTunes library is currently hosted, and I have also just got a new MacBook Pro. What I want to be able to do is sync my iTunes library from my iMac to my MacBook Pro, so that it is accessible away from my home network; then, if I make any changes to the iTunes library (like changing a track name), those changes sync back once I connect to the home network again. My current iTunes library contains music, videos, podcasts, playlists and iPhone apps. I would also like iTunes to track play counts collectively between the iMac and the MacBook Pro.
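
    As a partial, one-way sketch only: rsync can mirror the library folder from the iMac while on the home network, but it cannot merge play counts or edits made on the MacBook Pro back again, so it covers only half of what is asked (hostname and path are assumptions):

        # pull the whole iTunes folder from the iMac; run while iTunes is closed on both Macs
        rsync -av --delete imac.local:Music/iTunes/ "$HOME/Music/iTunes/"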

    Read the article

  • Which is better for multi-use auth, MySQL, PostgreSQL, or LDAP?

    - by Fearless
    I want to set up an Oracle Linux 6 server that gives users secure IMAP email (with Dovecot), Jabber IM, FTP (with vsftpd), and CalDAV. However, I want each user's single login to authenticate against all services (e.g. Joe Smith signs up once for a username and password that he can use for email, FTP, and his calendar). My question is: which database service will be best suited for that application? Also, is there a way to link the database with the preexisting server shell logins (e.g. so I can read the root account's LogCheck emails on a different device)?

    Read the article

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I've been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, updated with daily replicates. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers. And I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid it may be more than is needed for this usage: a very high read-to-write ratio, where all writes happen at the same exact time every day. My questions are simple: what would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.
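
    For what it's worth, PostgreSQL 9.x does ship built-in streaming replication (one primary, read-only hot standbys), which suits a read-mostly workload like tile rendering and geocoding; a configuration sketch, assuming 9.x-era settings and placeholder host names:

        # primary: postgresql.conf
        wal_level = hot_standby
        max_wal_senders = 3

        # standby: postgresql.conf
        hot_standby = on

        # standby: recovery.conf
        standby_mode = 'on'
        primary_conninfo = 'host=primary.example.org port=5432 user=replication'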

    Read the article
