Search Results

Search found 956 results on 39 pages for 'synchronization'.

Page 5/39

  • Programmatically syncing with remote servers

    - by Joseph
    My application generates text files that need to be synced with remote servers, which may be Windows or Linux. The sync has to happen without user intervention. I tried rsync, but Windows doesn't come with rsync by default, and it isn't possible to supply a password on the command line for rsync. Currently I'm going with FTP, but that seems like an inefficient way. Is there a way to run rsync without user intervention? What are the ways to sync with a remote server programmatically? The app is on Node.js.
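
    A minimal sketch of unattended syncing over SFTP (Python for brevity; similar SFTP client libraries exist for Node.js). The host, credentials, and paths are placeholders, and it assumes the remote machines run an SSH/SFTP server, which covers both Linux and Windows:

        import os
        import paramiko  # third-party SSH/SFTP client library

        HOST, USER, PASSWORD = "sync.example.com", "deploy", "secret"   # placeholders
        LOCAL_DIR, REMOTE_DIR = "/var/app/outbox", "/srv/app/inbox"

        def push_new_files():
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys in production
            client.connect(HOST, username=USER, password=PASSWORD)
            sftp = client.open_sftp()
            try:
                remote_names = set(sftp.listdir(REMOTE_DIR))
                for name in os.listdir(LOCAL_DIR):
                    if name not in remote_names:        # naive "what's missing" delta
                        sftp.put(os.path.join(LOCAL_DIR, name), f"{REMOTE_DIR}/{name}")
            finally:
                sftp.close()
                client.close()

        if __name__ == "__main__":
            push_new_files()   # run from cron or a scheduler for unattended syncs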

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect: every processor cycle takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:

    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
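
    A minimal sketch of the aggregation point from Part 1 (Python, with made-up worker code): the inputs and results are treated as immutable, so no lock guards the data itself, but the hand-off still goes through one synchronized structure (the queue), and a shared mutable total would instead need a lock or an atomic:

        import threading
        import queue

        results = queue.Queue()            # the one synchronized hand-off point

        def worker(chunk):                 # chunk is treated as immutable input
            partial = sum(chunk)           # pure computation, no shared mutable state
            results.put(("sum", partial))  # publish an immutable tuple

        chunks = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
        threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # aggregation happens in a single thread after the hand-off, so no further locking
        total = sum(value for _, value in (results.get() for _ in threads))
        print(total)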

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when you join and while you play. A pretty vague question, I agree. Let me refine it: let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. When playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would allow keeping the local database in sync. Now let's say the player moves and arrives in another cell. The surrounding cells change, and for all the new cells the player subscribes to, the same technique as used when joining the game has to be applied: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried and tested techniques in this domain. Thanks!
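
    A rough sketch of the subscribe/unsubscribe bookkeeping described above (Python; the function names are placeholders): the area of interest is the 3x3 block of cells around the player, newly entered cells get a full dump, cells that stay in range keep receiving deltas, and cells that fall out of range are dropped:

        def area_of_interest(cell_x, cell_y):
            """The 3x3 block of cells around the player's current cell."""
            return {(cell_x + dx, cell_y + dy)
                    for dx in (-1, 0, 1)
                    for dy in (-1, 0, 1)}

        def on_player_moved(old_cell, new_cell, send_full_dump, send_delta, unsubscribe):
            old_aoi = area_of_interest(*old_cell)
            new_aoi = area_of_interest(*new_cell)

            for cell in new_aoi - old_aoi:   # just entered range: full "database dump"
                send_full_dump(cell)
            for cell in new_aoi & old_aoi:   # still in range: keep streaming deltas
                send_delta(cell)
            for cell in old_aoi - new_aoi:   # left range: stop tracking
                unsubscribe(cell)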

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about the states and positions of objects in a 3D world. The player can control his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing in every article I read. Let's say we have an interpolation delay of 100ms, a server tick rate of 50ms, and a latency of 200ms. How do I know when 100ms has passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, therefore assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quickly, for instance at t=150? I would then be starting the client at t=150, and at t=250 it will think 100ms have passed since it connected to the server when in fact only 50ms have. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating the client?

    EDIT: This is how I ended up doing it: the client keeps a clock (approximately) in sync with the server. The client then simulates the world at simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. The round-trip time can fluctuate, so I average it out over time. By only keeping the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions; I'm currently simulating bad network connections, but it's looking good so far. Anyone see any possible problems?
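
    A small sketch of the clock bookkeeping described in the edit (Python; the constants and names are illustrative): the client refines a clock offset from ping/pong exchanges, keeps only the most recent round-trip samples, and simulates at syncedTime - avg(RTT)/2 - interpolationTime:

        import time
        from collections import deque

        INTERPOLATION_DELAY = 0.100      # 100 ms, as in the question
        RTT_WINDOW = 20                  # only the most recent samples count

        rtt_samples = deque(maxlen=RTT_WINDOW)
        clock_offset = 0.0               # estimated server_time - client_time

        def on_pong(sent_at, server_time):
            """Called when the server answers a ping sent at local time `sent_at`."""
            global clock_offset
            now = time.monotonic()
            rtt = now - sent_at
            rtt_samples.append(rtt)
            # the server stamped server_time roughly half an RTT before it arrived here
            clock_offset = server_time + rtt / 2 - now

        def simulation_time():
            """The point in server time the client should be simulating/rendering now."""
            avg_rtt = sum(rtt_samples) / len(rtt_samples) if rtt_samples else 0.0
            synced_time = time.monotonic() + clock_offset
            return synced_time - avg_rtt / 2 - INTERPOLATION_DELAY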

  • Is it time to deprecate synchronized, wait and notify?

    - by OldCurmudgeon
    Is there a single scenario (other than compatibility with ancient JVMs) where using synchronized is preferable to using a Lock? Can anyone justify using wait or notify over the newer mechanisms? Is there any algorithm that must use one of them in its implementation? I've seen previous questions that touched on this matter, but I would like to take this a little further and actually deprecate them. There are far too many traps, pitfalls, and caveats with them that have been ironed out by the new facilities. I just feel it may soon be time to mark them obsolete.

  • Path of Replication

    - by geeko
    I'm currently developing a replication system to keep data in sync between an arbitrary number of servers. Some of these servers exist in one cluster on one LAN; others exist somewhere else in the world. I'm wondering what the pros and cons are of the different paths along which we could choose to flow replicated data between servers. In other words, what are the different strategies for load balancing the replication process?

  • How to store and update data table on client side (iOS MMO)

    - by farseer2012
    Currently I'm developing an iOS MMO game with cocos2d-x. The game depends on many data tables (Excel files) given by the designers. These tables contain data like how much gold/crystal it costs to upgrade a building (barracks, laboratory, etc.). We have about 10 tables, each with about 50 rows of data. My question is: how should I store those tables on the client side, and how do I update them once they have been modified on the server side? My approach: use SQLite to store the data on the client side; the server parses the Excel files and sends the data to the client in JSON format, then the client parses the JSON string and saves it to the SQLite file. Is there any better method? I've noticed that some games store CSV files on the client side - how do they update those files? Could the server send a whole file directly to the client?
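
    One common refinement is to version each table so only changed tables are re-downloaded. A rough Python sketch of the client side (the endpoint, table names, and JSON shape are assumptions, not from the question):

        import json
        import sqlite3
        import urllib.request

        DB_PATH = "game_data.sqlite"
        SERVER = "https://example.com/tables"      # hypothetical endpoint

        def fetch_json(url):
            with urllib.request.urlopen(url) as resp:
                return json.loads(resp.read().decode("utf-8"))

        def sync_tables():
            db = sqlite3.connect(DB_PATH)
            db.execute("CREATE TABLE IF NOT EXISTS table_versions (name TEXT PRIMARY KEY, version INTEGER)")
            local = dict(db.execute("SELECT name, version FROM table_versions"))
            remote = fetch_json(f"{SERVER}/versions")       # e.g. {"upgrade_costs": 7, ...}

            for name, version in remote.items():
                if local.get(name) == version:
                    continue                                # unchanged table: skip the download
                rows = fetch_json(f"{SERVER}/{name}")       # e.g. [{"level": 1, "gold": 100}, ...]
                if not rows:
                    continue
                columns = list(rows[0].keys())
                db.execute(f"DROP TABLE IF EXISTS {name}")
                db.execute(f"CREATE TABLE {name} ({', '.join(columns)})")
                db.executemany(
                    f"INSERT INTO {name} VALUES ({', '.join('?' for _ in columns)})",
                    [tuple(row[c] for c in columns) for row in rows])
                db.execute("INSERT OR REPLACE INTO table_versions VALUES (?, ?)", (name, version))
            db.commit()
            db.close()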

  • How to synchronize the ball in a network pong game?

    - by Thaars
    I'm developing a multiplayer network pong game, my first game ever. The current state is: I'm running the physics engine with the same configuration on the server and the clients. The player's own paddle movement is predicted and just gets confirmed by the authoritative server. If a difference is detected between them, I correct the position on the client by interpolation. The opponent's paddle is also interpolated 200ms to 100ms in the past, because the server broadcasts snapshots every 100ms to each client. So far it works very well, but now I have to simulate the ball and have a problem understanding the procedure. I've read Valve's (and many other) articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball with their bullets, but their advantage is that the bullets are not visible. When I have to display the ball, and I see my paddle in the present, the opponent in the past, and the server somewhere in between, how can I synchronize the ball across all instances and ensure that it always gets hit by the paddle, even if the paddle is moving fast? Currently my ball's position is simply set by a server update, so it can happen that the ball bounces back even if the paddle is some pixels away (because of a delayed server position). Until now I've had no synced clock across all instances. I'm sending a client step index with each update to the server. If the server has done its job, it sends the snapshot with the last step index of each client back to the clients. Then I look up the stored position at the returned step index and compare them. Do I need a common clock to sync the ball?

    EDIT: I've tried to sync a common clock for the server and all clients with a timestamp. But I think it's better to use my own stepping instead of a timestamp (so I don't need to calculate with the ping and so on - and the timestamp will never be exact). The physics run 60 times per second and I now use that to keep them synchronized. Is that a good way? When the ball gets calculated by each client, the angle after bouncing can differ because of the different positions of the paddles (the opponent is 200ms in the past). When the server sends its ball position, velocity, and angle (because it knows the position of each paddle and is authoritative), the ball could be in a very different position because of the different angles after bouncing (because the clients receive the server data after 100ms). How is it possible to interpolate such a huge difference? I posted this question some days ago on Stack Overflow but got no answer yet. Maybe this is the better place for it.
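
    A small sketch of the step-index bookkeeping (Python; the thresholds and names are illustrative): the client records its predicted ball state per step, and when a snapshot stamped with that step index comes back it either keeps the prediction or adopts the server state and re-simulates from there:

        MAX_HISTORY = 120          # two seconds of history at 60 physics steps per second
        SNAP_THRESHOLD = 4.0       # how much divergence (in pixels) we tolerate

        history = {}               # step index -> predicted (x, y, vx, vy)

        def record_prediction(step, x, y, vx, vy):
            history[step] = (x, y, vx, vy)
            history.pop(step - MAX_HISTORY, None)      # bound the memory use

        def on_server_snapshot(step, server_ball):
            """server_ball: authoritative (x, y, vx, vy) for the step we sent earlier."""
            predicted = history.get(step)
            if predicted is None:
                return server_ball                     # nothing to compare against: accept
            dx = server_ball[0] - predicted[0]
            dy = server_ball[1] - predicted[1]
            if (dx * dx + dy * dy) ** 0.5 > SNAP_THRESHOLD:
                # a bounce resolved differently on the server: adopt its state at that
                # step and re-simulate the steps since then instead of interpolating
                return server_ball
            return None                                # close enough: keep the local prediction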

  • Syncing objects to a remote server, and caching on local storage

    - by Harry
    What's the best method of syncing objects (as JSON) to a remote server, with local caching? I have some objects that will pretty much just be plain text with some extra metadata. I was thinking of including a "last modified date" for both local storage and remote storage, which could then be used to determine which copy is the most recent. For example, even though objects will be saved to both local and remote storage when they are saved, sometimes the user may not have internet access, or the server may be down, or any number of other things. In this case, the last modified date for remote storage would stay at its previous value while local storage remains as it is. At this point the user could exit the application, and when they reload the application it would look at the last modified dates of the local and remote copies and decide which one wins. Is there anything I'm missing with this? Is there a better method I could use?
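
    A minimal sketch of the compare-on-launch idea (Python; the storage objects are placeholders for whatever local/remote APIs are in play). Note that a pure last-modified comparison can silently drop edits when both copies changed while offline:

        from datetime import datetime, timezone

        def newest_copy(local_obj, remote_obj):
            """Each object is a dict like {"text": ..., "modified": ISO-8601 string}."""
            if local_obj is None:
                return remote_obj
            if remote_obj is None:
                return local_obj
            local_ts = datetime.fromisoformat(local_obj["modified"])
            remote_ts = datetime.fromisoformat(remote_obj["modified"])
            return local_obj if local_ts >= remote_ts else remote_obj

        def save(obj, local_store, remote_store):
            """Write locally first; the remote write may fail offline and is retried later."""
            obj["modified"] = datetime.now(timezone.utc).isoformat()
            local_store.put(obj)
            try:
                remote_store.put(obj)
            except ConnectionError:
                pass   # remote keeps its older modified date; resolved later by newest_copy()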

  • Solutions for iOS collaborative sync (iCloud CoreData, CouchDB)?

    - by mluisbrown
    I'm developing an iOS app where one of the features will be allowing users to share and collaborate on data (e.g. lists). From everything I've read, and based on the way that iCloud CoreData sync works, I assume it would not be a good fit for the following reasons, but I wanted to make sure I wasn't missing anything, as I'd prefer not to use a 3rd-party syncing solution if at all possible:

    - iCloud sync of any kind (CoreData, Document, or Key/Value pairs) can only ever be between devices that use the same iCloud account, so it's designed for a single user syncing data over multiple devices. Any kind of collaborative sync (several people editing the same document/list simultaneously) would be limited to everyone having the same iCloud account.
    - Cases of people sharing the same iCloud account are usually limited to, for example, husband and wife or similarly close relationships among a small number of people.
    - iCloud CoreData sync is for ensuring that each synced device has the same data. It doesn't seem to allow syncing just a subset of the data, so scenarios in which each user has their own documents and is only sharing / collaborating on a subset of them are not supported.

    And I'm not even mentioning the well-documented problems with iCloud CoreData syncing, which may or may not have been resolved with iOS 7. Given the above, it would seem that CouchDB (with TouchDB) would be a better option, as it seems to support everything I need. What other options are there that people can recommend?

  • Barrier implementation with mutex and condition variable

    - by kkp
    I would like to implement a barrier using mutex locks and condition variables. Please let me know whether my implementation below is correct.

        #include <pthread.h>
        #include <stddef.h>

        /* assumed declarations for lock, cond_var and nThreads, which the snippet uses */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t cond_var = PTHREAD_COND_INITIALIZER;
        static int nThreads = 4;   /* number of threads participating in the barrier */

        static int counter = 0;
        static int Gen = 999;

        void *thread_run(void *arg)
        {
            pthread_mutex_lock(&lock);
            int g = Gen;
            if (++counter == nThreads) {
                counter = 0;
                Gen++;
                pthread_cond_broadcast(&cond_var);
            } else {
                while (Gen == g)
                    pthread_cond_wait(&cond_var, &lock);
            }
            pthread_mutex_unlock(&lock);
            return NULL;
        }

  • Write a program using 3 threads: one prints 10 'A's, the second prints 10 'B's, and the third prints 10 'C's, with synchronization

    - by user132967
    I am trying to implement this question using threads and mutexes. This is my code:

        /* the five #include lines lost their header names when the question was posted;
           these are the headers the code actually uses */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <pthread.h>
        #include <time.h>

        #define Num_thread 3

        pthread_mutex_t lett[Num_thread];

        void Sleep_rand(double max)
        {
            struct timespec delai;
            delai.tv_sec = max;
            delai.tv_nsec = 0;
            nanosleep(&delai, NULL);
        }

        void *Print_Sequence(void *param);

        int main()
        {
            int i;
            pthread_t tid[Num_thread];   /* the thread identifiers */

            for (i = 0; i < Num_thread; i++)
                pthread_mutex_init(&lett[i], 0);

            for (i = 0; i < Num_thread; i++) {
                printf("i=%d\n", i);
                pthread_create(&tid[i],        /* will hold the thread id after successful creation */
                               NULL,           /* the thread attributes */
                               Print_Sequence, /* the function the thread will run */
                               &i);            /* the parameter's address sent to the function */
            }

            /* wait till threads are complete and join before main continues */
            for (i = 0; i < Num_thread; i++) {
                pthread_join(tid[i], NULL);
            }
            return 0;
        }

        /* the thread will begin control in this function */
        void *Print_Sequence(void *param)
        {
            int i, *j = (int *) param;
            printf("j=%d\n", (*j));
            int max;

            pthread_mutex_lock(&lett[0]);
            pthread_mutex_lock(&lett[1]);
            for (i = 1; i <= 10; i++) {
                max = (int) (8 * rand() / (RAND_MAX + 1.0));
                Sleep_rand(max);
                printf("A");
            }
            pthread_mutex_unlock(&lett[0]);

            pthread_mutex_lock(&lett[2]);
            for (i = 1; i <= 10; i++) {
                max = (int) (2 * rand() / (RAND_MAX + 1.0));
                Sleep_rand(max);
                printf("B");
            }
            for (i = 1; i <= 10; i++) {
                max = (int) (15 * rand() / (RAND_MAX + 1.0));
                Sleep_rand(max);
                printf("C");
            }
            pthread_mutex_unlock(&lett[1]);
            pthread_mutex_unlock(&lett[2]);
            pthread_exit(0);
        }

    and the output is like:

    AAAAAAAAAABBBBBBBBBBCCCCCCCCCCAAAAAAAAAABBBBBBBBBBCCCCCCCCCCAAAAAAAAAABBBBBBBBBBCCCCCCCCCC

    Could anyone please explain what is wrong with the code?

  • How do I duplicate a Box2d simulation, mid-simulation?

    - by Whyte
    I want to serialize the state mid-game, send it over the network to an identical computer (same CPU, same OS, same binary), load it there, and have the two games run in tandem doing the exact same simulation, without one of them drifting off and going haywire. In short: I want pop-in, pop-out networking support on my HIGHLY physics-intensive game, where sending object coordinates every few seconds is impossible, due to having thousands of objects and many clients. I tried this with Box2D, and saving an object's location/velocity/etc. wasn't enough... there's internal state that's not accessible through any public methods. My current workaround is to force EVERY client to save its entire world state and reload it from scratch whenever a new player connects... but this is obviously bad practice, because it hangs the game for everyone whenever someone new connects. However, it works, with zero desynchronization. So, anyone know of any other techniques that can help me? Or should I just kiss my project goodbye?

  • Does Dropbox use a cronjob to sync up?

    - by Yko
    I was looking into lipsync for a Dropbox clone. I was looking at the diagram of how it works here. It shows that a cronjob is used to keep files in sync between the client and the server. Does that mean that every second/minute/hour a cronjob runs and checks to see if there is a difference between the client and the server? Is that how Dropbox does it? If it does use a cronjob, what happens when you are in the middle of syncing and another cronjob runs? Does rsync (or additional libraries) know how to handle this?

  • Synchronize Azure SQL (cloud) with Azure SQL Emulator (local)?

    - by Sid
    We have an Azure service (web role) that depends heavily on the database. For offline development/testing, we'd like to have the app and the database run offline within the emulators. Running the web role itself within the emulator is straightforward, but doing so for the Azure SQL storage isn't. What is the simplest way to ensure that the cloud Azure SQL database and the emulator/local Azure SQL database are in sync? We can afford some level of staleness for simplicity of the sync operation (meaning it's OK for the local copy to be a few hours stale versus mirroring every write as soon as it happens). Thanks

  • what libraries or platforms should I use to build web apps that provide real-time, asynchronous data

    - by Daniel Sterling
    This is less a question with a simple, practical answer and more a question to foster discussion on the topic of real-time data exchange. I'll begin with an example: Google Wave is, at its core, a real-time asynchronous data synchronization engine. Wave supports (or plans to support) concurrent (real-time) document collaboration, disconnected (offline) document editing, conflict resolution, document history and playback with attribution, and server federation. A core part of Wave is the Operational Transformation engine: http://www.waveprotocol.org/whitepapers/operational-transform The OT engine manages document state. Changes between clients are merged, and each client has a sane and consistent view of the document at all times; the final document is eventually consistent between all connected clients.

    My question is: is this system abstract or general enough to be used as a library or generic framework upon which to build web apps that synchronize real-time, asynchronous state in each client? Is the Wave protocol directly used by any current web applications (besides Google's client)? Would it make sense to use it directly for generic state synchronization in a web app? What other existing libraries or frameworks would you consider using when building such a web app? How much code in such an app might be domain-specific logic vs. generic state synchronization logic? Or, put another way, how leaky might the state synchronization abstractions be? Comments and discussion welcomed!

  • Achieving Thread-Safety

    - by Smasher
    Question

    How can I make sure my application is thread-safe? Are there any common practices, testing methods, things to avoid, or things to look for?

    Background

    I'm currently developing a server application that performs a number of background tasks in different threads and communicates with clients using Indy (using another bunch of automatically generated threads for the communication). Since the application should be highly available, a program crash is a very bad thing and I want to make sure that the application is thread-safe. No matter what, from time to time I discover a piece of code that throws an exception that never occurred before, and in most cases I realize that it is some kind of synchronization bug where I forgot to synchronize my objects properly. Hence my question concerning best practices, testing of thread-safety, and things like that.

    mghie: Thanks for the answer! I should perhaps be a little bit more precise. Just to be clear, I know about the principles of multithreading, I use synchronization (monitors) throughout my program, and I know how to differentiate threading problems from other implementation problems. But nevertheless, I keep forgetting to add proper synchronization from time to time. Just to give an example, I used the RTL sort function in my code. It looked something like FKeyList.Sort(CompareKeysFunc); It turns out that I had to synchronize FKeyList while sorting. It just didn't come to mind when I initially wrote that simple line of code. It's these things I want to talk about. What are the places where one easily forgets to add synchronization code? How do YOU make sure that you added sync code in all the important places?

  • IIS Configuration Synchronization for Web Server Farm?

    - by Nate Bross
    I'm wondering if there is any good/easy way to keep the IIS configurations synchronized? I'm going to be setting up a pair of IIS servers with Network Load Balancing. I can get the data files (HTML, etc.) synchronized just fine, but I'll be adding new websites fairly often and I'd like to avoid doing the IIS configuration on multiple servers.

  • secure synchronization of large amount of data

    - by goncalopp
    I need to automatically mirror a large volume of files (terabytes) between two Unix machines over a slow link (1 Mbps). This needs to be done frequently, but the data doesn't change too much (the delta transmission doesn't saturate the link). The usual solution would be rsync, but there's an additional requirement: it's undesirable, from a security standpoint, for either the source or destination machine to have (passphrase-less) SSH keys for the other, or any kind of filesystem access to it. All communication between the two machines should thus be initiated (and mediated) through a third machine. I've asked a separate question about rsync in particular here. Are there other obvious solutions I'm missing?
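
    A rough sketch of the third-machine relay (Python driving stock rsync over SSH; hostnames, users, and paths are made up): the mediator pulls from the source and pushes to the destination, so the two endpoints never hold credentials for each other. The trade-off is that the mediator keeps a full staged copy, and the data may cross the slow link twice unless the mediator sits near one end:

        import subprocess

        SOURCE = "backup@source.example.com:/data/"
        DEST = "backup@dest.example.com:/data/"
        STAGING = "/var/staging/data/"           # on the mediating machine

        def run(cmd):
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def relay():
            # pull changed files from the source onto the mediator...
            run(["rsync", "-az", "--delete", "-e", "ssh", SOURCE, STAGING])
            # ...then push the staged copy out to the destination
            run(["rsync", "-az", "--delete", "-e", "ssh", STAGING, DEST])

        if __name__ == "__main__":
            relay()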

  • active directory servers synchronization

    - by Mit Naik
    I have 3 AD servers running Windows Server 2008 R2 in 3 different places; the main server is at the datacenter and the other 2 are in our local offices, at 2 different sites. I want to synchronize all 3 servers together, where the datacenter server should be the central server and the other 2 servers should sync with it. Please provide the steps or a tutorial for doing this. We also want that, once changes are made on 1 of the AD servers, the changes are automatically applied to all the servers. For example, if I change the password of a user on our local server, it should be updated on our main AD server and the other branch server too. One more question: I have already created the main datacenter AD as domain.local and the other domains as xyz.local and abc.local. How can I replicate the additional AD domains with the main datacenter DC? Also, do we require a VPN connection, or is there any other way to replicate the servers without using a VPN connection?

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario: Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var. The backup script takes a snapshot of /var. The backup script creates backups from the snapshots it has, er, snapshotted. So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.

  • How to switch off Outlook Synchronization Manager

    - by mr.nothing
    I have one strange thing happening to my Outlook. It tries to synchronize something (it seems to do this every time send/receive triggers). Does anybody know how to switch this manager off? Here is the icon of the sync manager in the task bar (the left one): https://www.sugarsync.com/pf/D9085558_69411843_95701 Thanks in advance! P.S. It's also annoying that something has gone wrong and the sync failed. Also, every time it tries to sync something, Internet Explorer tells me that there are too many temporary internet files and asks me to expand the space for storing them: https://www.sugarsync.com/pf/D9085558_69411843_95781 Please help! This is driving me crazy.

  • FTP Synchronization software for Mac or PC

    - by evanmcd
    Hi, I've been using FTP Synchronizer for a while and have generally had pretty good results with it. But I've just moved to a Mac full-time (at work as well as at home now), so I want to get a native client if I can. I've tried the only one I've found - SuperFlexibleSynchronizer - but it crashed every time I loaded an FTP-to-FTP sync attempt. The most important features to me are: 1) the ability to sync a large number of files (thousands), as I generally work on sites with large numbers of files; 2) FTP-to-FTP sync. This would be very helpful, as I work with some CMS-based sites for which users upload files on staging, and I don't want to have to move files locally first before moving them live. Thanks! Evan

  • Wiki/CMS with synchronization?

    - by Clinton Blackmore
    We're looking into putting up a wiki or CMS for internal use by our IT department. One of the big things we want to use it for is disaster recovery procedures. Given that a disaster, such as a power or network outage, might render the wiki inaccessible, it seems sensible to host the wiki in two places so that if one is inaccessible, we can fall back to the other. Are there any wikis or CMSes that synchronize (or an alternate way to achieve a similar end)?
