Search Results

Search found 1706 results on 69 pages for 'distributed'.

Page 13 of 69.

  • Recommended integration mechanism for bi-directional, authenticated, encrypted connection in C client

    - by rcampbell
    Let me first give an example. Imagine you have a single server running a JVM application. This server keeps a collection of N equations, one for each client:

        Client #1: 2x
        Client #2: 1 + y
        Client #3: z/4

    The server includes an HTTP interface so that random visitors can type https://www.acme.com/client/3 into their browsers and see the latest evaluated result of z/4. The tricky part is that either the client or the server may change a variable's value at any time, informing the other party immediately. More specifically, Client #3 - a C app - can initially tell the server that z = 20. An hour later that same client informs the server that z = 23. Likewise, the server can later inform the client that z = 28.

    As caf pointed out in the comments, there can be a race condition when values are changed by the client and server simultaneously. The solution is for both client and server to send the operation performed in their message, which is then executed by the other party. To keep things simple, let's limit the operations to (commutative) addition, allowing us to disregard message ordering. For example, the client seeds the server with z = 20:

        server: z = 20, client: z = 20
        server sends a {+3} message (so z = 23 locally) & client sends a {-2} message (so z = 18 locally) at the exact same time
        server receives the {-2} message at some point and adds it to his local copy, so z = 21
        client receives the {+3} message at some point and adds it to his local copy, so z = 21

    As long as all messages are eventually evaluated by both parties, the correct answer will eventually be given to the users of the client and the server, since we limited ourselves to commutative operations (addition of 3 and -2). This does mean that both client and server can return incorrect answers in the time it takes for messages to be exchanged and processed. While undesirable, I believe this is unavoidable. (A minimal sketch of this commutative-update idea is at the end of this post.)

    Some possible implementations of this idea include:

    1. An encrypted, always-on TCP socket connection for communication.
       Pros: no additional infrastructure needed; client and server know immediately if there is a problem (disconnect) with the other party; fairly straightforward (except for the encryption); native support on both the JVM and C platforms.
       Cons: pretty low-level, so you end up writing a lot yourself (protocol, delivery verification, retry-on-failure logic); probably a lot of firewall headaches during client app installation.

    2. Asynchronous messaging (e.g. ActiveMQ).
       Pros: transactional; both C & Java integration; frees the client and server apps from needing retry logic or delivery verification; pretty straightforward encryption; easy extensibility via message filters/routers/etc.
       Cons: needs additional infrastructure (a message server) which must never fail.

    3. A database or file system as the asynchronous integration point.
       Same pros/cons as above, but messier.

    4. A RESTful web service.
       Pros: simple; possible reuse of the server's existing REST API; SSL solves the encryption problem for you (maybe use an RSA key a la GitHub for authentication?).
       Cons: the client now needs to run a C HTTP REST server with SSL, and client and server need retry logic. Axis2 has both a Java and a C version, but you may be limited to SOAP.

    What other techniques should I be evaluating? What real-world experiences have you had with these mechanisms? Which do you recommend for this problem, and why?
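
    Here is a minimal sketch of the commutative-update idea, assuming a hypothetical transport that delivers the {+3}/{-2} style deltas; only the merge logic is shown:

        import java.util.concurrent.atomic.AtomicLong;

        // Sketch: each side keeps a local value and applies deltas. Because
        // addition commutes, both sides converge once every message has been
        // applied, regardless of delivery order.
        class SharedVariable {
            private final AtomicLong value;

            SharedVariable(long seed) {
                value = new AtomicLong(seed);
            }

            // A local edit; the returned delta must be sent to the peer.
            long applyLocal(long delta) {
                value.addAndGet(delta);
                return delta;
            }

            // An incoming {+3} / {-2} style message from the peer.
            void applyRemote(long delta) {
                value.addAndGet(delta);
            }

            long current() {
                return value.get();
            }
        }

    Replaying the example above: both sides start at 20; the server applies +3 (23) while the client applies -2 (18); after each applies the other's delta, both read 21.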

    Read the article

  • Connecting to an RMI object without registry

    - by Mark Probst
    I think I need to connect to a remote RMI object without going through the registry, but I don't know how. My situation is this: I'm implementing a simple job distribution service which consists of one distributor and multiple workers. The distributor has a registered RMI object to which clients connect to send jobs and workers connect to accept jobs. Unfortunately, the distributor and worker hosts are behind a firewall. To get to the distributor host I am tunneling two ports (one for the registry, one for the distributor object) via SSH, so I can reach the registry and the distributor from outside the firewall. To make that work I have to set "-Djava.rmi.server.hostname=localhost" on the distributor JVM so that the clients connect to their local, tunneled port instead of the port on the actual distributor host, which is blocked. This creates a problem for the workers, though, because they need to connect to the distributor directly, but because of the "localhost" redirection they behave like clients and try to connect to a port on their own host, which is not available, because I'm not tunneling on the workers (that would be impractical). Now, if I could connect to a remote object directly by giving the hostname and port, I could do away with both the registry on the distributor and the "localhost" hack, and make the workers connect properly. How do I do that? Or is there a different solution to this problem?
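
    For what it's worth, a sketch of one way around the registry ("Distributor" is a stand-in for your remote interface, and you still need some channel - a file, an HTTP fetch, the existing tunnel - to move the bytes). The registry is only a bootstrap; the stub returned by exportObject is an ordinary serializable object:

        import java.io.*;
        import java.rmi.Remote;
        import java.rmi.server.UnicastRemoteObject;

        // Sketch: hand out RMI stubs without a registry by serializing them.
        class StubBootstrap {
            // On the distributor: export the object and write its stub out.
            static void publish(Distributor impl, File out) throws Exception {
                Remote stub = UnicastRemoteObject.exportObject(impl, 0);
                try (ObjectOutputStream oos =
                         new ObjectOutputStream(new FileOutputStream(out))) {
                    oos.writeObject(stub);
                }
            }

            // On a worker: read the stub back; calls go straight to the
            // host/port recorded inside the stub, no registry lookup at all.
            static Distributor connect(File in) throws Exception {
                try (ObjectInputStream ois =
                         new ObjectInputStream(new FileInputStream(in))) {
                    return (Distributor) ois.readObject();
                }
            }
        }

    One caveat: the stub records the java.rmi.server.hostname value in effect when the object was exported, so the "localhost" setting would still leak into stubs handed to workers; the worker-facing object would need to be exported with the real hostname (e.g. from a second JVM) for this to help.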

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction: Hi, I am going to ask a question which seems utopian to me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not.

    The idea: Suppose I have a database structure in MySQL. I want to create some solution to allow anyone (no matter who, no matter where) to have a synchronized copy (updated clone) of this database, with its content. And it is not going to be just one synchronized copy; it could (and should) be multiple replicas (say, ten copies all over the world). And, most importantly, it must be secure. By secure I mean only real, accepted transactions will be synchronized with all the other database copies/clones (no matter how many). Note: since it would be quite difficult to make the synchronization happen in real time, I will design everything so that this feature is dispensable. So it is not required.

    My auto-suggestion: this is how I am thinking to manage it.

    Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. (I think better than a DATETIME field would be an INT one, holding e.g. the number of milliseconds passed since 1st January 2013.) So each copy is going to ask a "neighbour copy" for new actions done since its last update, and execute them after checking that they are allowed. (A rough sketch of this exchange is at the end of this post.)

    Problem 1: the "neighbour copy" could be outdated too.
    Solution 1: do not ask just one neighbour; create a random list with some of the copies/clones and ask them for news. (I could avoid the list and ask ALL the clones for updates, but this will be inefficient if the number of clones grows too much.)

    Problem 2: real-time global synchronization is not active. What if someone at CLONE_ENTERPRISING inserts a row into TABLE, this row goes to every clone, someone at CLONE_FIXEMALL deletes this row, and at the same time, somewhere in an outdated clone, someone at CLONE_DROPOUT edits this row (now nonexistent at the other clones)?
    Solution 2: easy stuff - force a GLOBAL synchronization before doing any new "depending-on-third-party-data" action (an edit, for example). This global synch will be unnecessary when making an INSERT, for instance. Note: someone could have some fun and make the same insert in two clones... since they're not getting updated in real time, this row will exist twice. But it's the same as when we have one single database: in some cases we need to check whether the same row already exists before doing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just do some trolling. This is not a problem, since good clones will always be somewhere. Those which went bad won't be of interest anymore.

    I really appreciate it if you read this far. I know this is not the perfect solution - it possibly has hundreds of holes - but it is my basic starting point. I will appreciate anything you can teach me now. Thanks a lot.

    PS: It could be that all this I am trying to do already exists and has its own name. Sorry for asking then (I'd be thankful for the name anyway, if it exists).
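
    In case a concrete shape helps the discussion, a minimal sketch of the timestamped action log described above (the names are made up; "actions" here are just opaque instruction strings):

        import java.util.List;
        import java.util.concurrent.ConcurrentSkipListMap;

        // Sketch of one clone's action log: every action is stored under its
        // millisecond time identifier, and a neighbour clone can ask for
        // everything newer than the last timestamp it has seen.
        class ActionLog {
            // time identifier (ms since the chosen epoch) -> action instruction
            private final ConcurrentSkipListMap<Long, String> log =
                new ConcurrentSkipListMap<>();

            void record(long timestamp, String action) {
                log.put(timestamp, action);
            }

            // What a neighbour calls during its update check.
            List<String> actionsSince(long since) {
                return List.copyOf(log.tailMap(since, false).values());
            }
        }

    Note that two clones can generate the same millisecond value, so designs like this usually pair the timestamp with a clone id (Lamport timestamps). As a whole, the scheme being described is essentially multi-master (optimistic) replication with eventual consistency, which may be the existing name asked about in the PS.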

    Read the article

  • wanting a good memory + disk caching solution

    - by brofield
    I'm currently storing generated HTML pages in a memcached in-memory cache. This works great; however, I want to increase the storage capacity of the cache beyond available memory. What I would really like is:

        memcached semantics (i.e. not reliable, just a cache)
        memcached API preferred (but not required)
        large in-memory first-level cache (MRU)
        huge on-disk second-level cache (main)
        eviction from the on-disk cache at maximum storage, using LRU or LFU
        a proven implementation

    In searching for a solution I've found the following candidates, but they all miss my marks in some way (a toy sketch of the two-level idea I mean is at the end of this post). Does anyone know of either other options that I haven't considered, or a way to make memcachedb do evictions? Already considered are:

        memcachedb
            best fit, but doesn't do evictions: explicitly "not a cache"
            can't see any way to do evictions (either manual or automatic)
        tugela cache
            abandoned, no support
            don't want to recommend it to customers
        nmdb
            doesn't use the memcache API
            new and unproven
            don't want to recommend it to customers
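
    To pin down the semantics I mean, here is a toy sketch of the two-level idea: an in-memory LRU level that demotes evicted entries to a disk level. An illustration only, not a production cache:

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Toy sketch: small in-memory LRU level in front of a larger second
        // level (an interface here; it would be the on-disk store in practice).
        class TwoLevelCache<K, V> {
            interface DiskStore<K, V> {   // second level, evicted by its own LRU/LFU
                V get(K key);
                void put(K key, V value);
            }

            private final DiskStore<K, V> disk;
            private final Map<K, V> memory;

            TwoLevelCache(final int memoryEntries, final DiskStore<K, V> diskStore) {
                this.disk = diskStore;
                // Access-ordered LinkedHashMap gives LRU; entries evicted from
                // memory are demoted to the disk level instead of being dropped.
                this.memory = new LinkedHashMap<K, V>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        if (size() > memoryEntries) {
                            diskStore.put(eldest.getKey(), eldest.getValue());
                            return true;
                        }
                        return false;
                    }
                };
            }

            synchronized V get(K key) {
                V v = memory.get(key);
                if (v == null && (v = disk.get(key)) != null) {
                    memory.put(key, v);   // promote on a second-level hit
                }
                return v;
            }

            synchronized void put(K key, V value) {
                memory.put(key, value);
            }
        }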

    Read the article

  • What's the difference between Paxos and W+R>N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of a quorum, i.e. the number of synchronously written replicas (W) and the number of replicas to read (R) should be chosen in such a way that W+R>N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent, fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R>N scheme?
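
    For reference, the arithmetic behind the condition: with N = 3 replicas, W = 2 and R = 2 give W + R = 4 > 3, so by the pigeonhole principle every read quorum overlaps every write quorum in at least W + R - N = 1 replica, which is what lets a read see the latest acknowledged write. As a one-line check:

        // Any R-replica read set must intersect any W-replica write set;
        // they can only miss each other entirely when W + R <= N.
        class Quorum {
            static boolean quorumsOverlap(int n, int w, int r) {
                return w + r > n;   // overlap size >= w + r - n >= 1
            }
        }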

    Read the article

  • What should I do for accommodating large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34 "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically the following operations in the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The table definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        # Try to append to an existing row first; fall back to INSERT if no
        # row with this fingerprint was updated.
        updated = self.cursor.execute(
            "UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
            ("," + newDocId, thisFP))
        if updated == 0:
            self.cursor.execute(
                "INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra, but have little knowledge of it. Please suggest a better way to tackle this problem.

    Read the article

  • How do you export or release a Mac OS X program made in Xcode? Program does not load on other computers

    - by SolidSnake4444
    I made a program in Xcode: a simple calculator that takes a first number and a second number, and then either adds, subtracts, multiplies, or divides depending on the radio button. I build and run, and the program comes up and works fine. But when I went to show my friends on their Macs, double-clicking the program makes its icon pop up in the Dock for about .05 seconds and then disappear, and we never can actually run the program. It still works perfectly on my computer, however. What am I doing wrong? How can I take the program I made and run it on different Macs? I have the release target set to 10.5 but the active SDK set to 10.6, and it runs in both the 10.5 and 10.6 simulators. One friend has 10.6.3 like me, and the other has 10.5.x (can't remember the last part). To get the app, I changed from debug to release and set the active SDK to 10.5. Then in the release folder I found the app and sent it over iChat. I feel this will be a problem in the future if I ever make a legit application to distribute. Thank you!

    Read the article

  • C++ Winsock P2P

    - by Goober
    Scenario: Does anyone have any good examples of peer-to-peer (P2P) networking in C++ using Winsock? It's a requirement I have for a client who specifically needs to use this technology (God knows why). I need to determine whether this is feasible. Any help would be greatly appreciated.

    Read the article

  • How to bundle shell script and C code?

    - by eSKay
    I have a C shell script that calls two C programs, one after the other, with some file handling before, in between, and afterwards. As such, I have three different files: one C shell script and two .c files. I need to give this script to other users, and the problem is that I have to distribute three files, which the users must keep in the same folder before executing the script. Is there some better way to do this? [I know I can make one C code file out of those two... but I will still be left with a shell script and a C file. Actually, the two C programs do entirely different things, so I want them to be separate.]

    Read the article

  • How to make my Java Swing application a Client-Server application?

    - by Jonas
    I have made a Java Swing application. Now I would like to make it a Client-Server application. All clients should be notified when data on the server is changed, so I'm not looking for a Web Service. The Client-Server application will be run on a single LAN, it's a business application. The Server will contain a database, JavaDB. What technology and library is easiest to start with? Should I implement it from scratch using Sockets, or should I use Java RMI, or maybe JMS? Or are there other alternatives that are easier to start with? And is there any server library that I should use? Is Jetty an alternative?
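
    For the "all clients should be notified" part, here is roughly what the RMI route looks like (a sketch with made-up interface names; the service still has to be exported and published as usual):

        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        // Sketch: server push over RMI. Each client exports a callback object
        // and registers it; the server invokes the callbacks on every change.
        interface DataListener extends Remote {
            void dataChanged(String table) throws RemoteException;
        }

        interface DataService extends Remote {
            void register(DataListener listener) throws RemoteException;
        }

        class DataServiceImpl implements DataService {
            private final List<DataListener> listeners =
                new CopyOnWriteArrayList<>();

            public void register(DataListener l) {
                listeners.add(l);
            }

            // Call after every committed change to the JavaDB database.
            void fireChanged(String table) {
                for (DataListener l : listeners) {
                    try {
                        l.dataChanged(table);
                    } catch (RemoteException e) {
                        listeners.remove(l);   // drop clients that went away
                    }
                }
            }
        }

    JMS gives the same effect by publishing to a topic instead of keeping the listener list yourself, at the cost of running a broker.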

    Read the article

  • Condor job using DAG with some jobs needing to run the same host

    - by gurney alex
    I have a computation task which is split into several individual program executions, with dependencies. I'm using Condor 7 as the task scheduler (with the Vanilla Universe, due to constraints on the programs beyond my reach, so no checkpointing is involved), so a DAG looks like a natural solution. However, some of the programs need to run on the same host, and I could not find a reference on how to do this in the Condor manuals. Example DAG file:

        JOB A A.condor
        JOB B B.condor
        JOB C C.condor
        JOB D D.condor
        PARENT A CHILD B C
        PARENT B C CHILD D

    I need to express that B and D must run on the same compute node, without breaking the parallel execution of B and C. Thanks for your help.
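
    One workaround to consider (a sketch, not something I have seen the manual promise for this case): pin B and D to a named execute node with a Requirements expression in both submit files, assuming the node can be chosen up front. The hostname below is a placeholder:

        # In both B.condor and D.condor: force the job onto one machine.
        # "node01.example.com" stands in for a node in your pool.
        requirements = (Machine == "node01.example.com")

    This keeps B and C parallel (C stays unconstrained), but it pins the B/D pair statically rather than "wherever B happened to land".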

    Read the article

  • Can computer clusters be used for general everyday applications?

    - by Matt Pascoe
    Does anyone know how a computer cluster can be used for everyday applications, for example video games? I would like to build a computer cluster that can run applications which were not specifically designed for computer clusters and still see a performance increase. One use would be for video games, but I would also like to utilize the increased computing power for running a large network of virtualized machines.

    Read the article

  • Parallelizing a serial algorithm

    - by user643813
    Hi folks, I am working on porting a text-mining/natural-language application from single-core to a Map-Reduce style system. One of the steps involves a while loop similar to this:

        Queue<Element> queue = ...;
        while (!queue.isEmpty()) {
            Element e = queue.poll();
            Set<Element> result = calculateResultSet(e);
            if (!result.isEmpty()) {
                queue.addAll(result);
            }
        }

    Each iteration depends (kind of) on the result of the one before, and there is no way of determining the number of iterations this loop will have to perform. Is there a way of parallelizing a serial algorithm such as this one? I am trying to think of a feedback mechanism that is able to provide its own input, but how would one go about parallelizing it? Thanks for any help/remarks.
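
    One way to parallelize this shape of loop (a sketch of the general pattern, not Map-Reduce specific): make each element a task that may submit follow-up tasks, and count in-flight tasks so you can tell when the queue has truly drained:

        import java.util.Set;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicInteger;

        // Sketch: a self-feeding work queue. Tasks may spawn new tasks, and
        // 'pending' counts submitted-but-unfinished tasks, so reaching zero
        // means the whole computation is done.
        abstract class ParallelExpansion {
            private final ExecutorService pool = Executors.newFixedThreadPool(8);
            private final AtomicInteger pending = new AtomicInteger();
            private final CountDownLatch done = new CountDownLatch(1);

            void submit(Element e) {
                pending.incrementAndGet();
                pool.execute(() -> {
                    try {
                        for (Element next : calculateResultSet(e)) {
                            submit(next);          // feedback: results re-enter
                        }
                    } finally {
                        if (pending.decrementAndGet() == 0) {
                            done.countDown();      // nothing left in flight
                        }
                    }
                });
            }

            void awaitCompletion() throws InterruptedException {
                done.await();
                pool.shutdown();
            }

            // Provided by the application, as in the original loop.
            abstract Set<Element> calculateResultSet(Element e);

            interface Element {}
        }

    ForkJoinPool automates the same pattern with work stealing; the key point is that the unknown iteration count stops mattering once termination is detected by the in-flight count rather than by queue emptiness.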

    Read the article

  • Is a Service Bus a good option to communicate with multiple external servers?

    - by PFreitas
    We are developing an application that communicates, via web services or TCP/IP sockets, with multiple servers (up to 50 different external companies). Basically, the exchanged messages are the same (XML), but depending on the inputs of our application we may need to call one or more external servers. The benefits we would expect from introducing a service bus into the architecture are:

        1. Removing the need to manage all the point-to-point configurations (all 50 endpoints);
        2. Simplifying the communication layer of our application by having only one server to talk to.

    Is a service bus a good architectural option for this scenario? What is the best (simplest) service bus for this kind of communication? I read a few MSDN articles on Azure Service Bus Relay, but it didn't seem to fit our needs. Am I wrong? Thanks for your help.

    Read the article

  • Split text files across threads

    - by Kevin
    The problem: I have a few text files (10) with numbers on every line. I need to have them split across some threads I create using the pthread library. These worker threads need to find the largest prime number that gets sent to them (and, overall, the largest prime across all of the text files).

    My current thoughts on a solution: I am thinking of having two arrays, with all of the text files in one array, and the other holding a binary file from which I can read, say, 1000 lines; I would send a pointer to the index of that binary file in a struct that contains the id, file pointer, and file position, and let each worker crank through that. A little bit of what I am talking about:

        /* Pass a struct describing its slice of the input to each worker. */
        pthread_create(&threads[index], NULL, calc_sqrt, (void *)&threadFields[index]);

    The struct:

        typedef struct threadFields {
            int id;          /* worker id                */
            long position;   /* offset of this chunk     */
            FILE *fin;       /* handle to the input file */
        } tField;

    If anyone has any insight or a better solution, it would be greatly appreciated. Thanks!

    Read the article

  • How do I control the output file names and content of a Hadoop streaming job?

    - by Eran Kampf
    Is there a way to control the output filenames of a Hadoop Streaming job? Specifically, I would like my job's output files' content and names to be organized by the key the reducer outputs: each file would contain values for only one key, and its name would be the key. Update: just found the answer - using a Java class that derives from MultipleOutputFormat as the job's output format allows control of the output file names: http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/lib/MultipleOutputFormat.html I haven't seen any samples for this out there... Can anyone point to a Hadoop Streaming sample that makes use of a custom output format Java class?
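
    For reference, a sketch of the derived class the update refers to; MultipleTextOutputFormat (a MultipleOutputFormat subclass in the same package) lets you map each key/value pair to a file name:

        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

        // Sketch: route every record to a file named after its key, and strip
        // the key from the file contents so each file holds only the values.
        public class KeyAsFileNameOutputFormat
                extends MultipleTextOutputFormat<Text, Text> {

            @Override
            protected String generateFileNameForKeyValue(Text key, Text value,
                                                         String name) {
                return key.toString();   // one output file per distinct key
            }

            @Override
            protected Text generateActualKey(Text key, Text value) {
                return null;             // omit the key; files keep values only
            }
        }

    For a streaming job, the compiled class would then be shipped on the job's classpath and named via the -outputformat option.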

    Read the article

  • Oracle global_names DELETE problem

    - by jyzuz
    I'm using a database link to execute a DELETE statement on another DB, but the DB link name doesn't conform to global naming, and this requirement cannot change. Also, I have global_names set to false, and that cannot be changed either. When I try to use these links, however, I receive:

        ORA-02069: global_names parameter must be set to TRUE for this operation
        Cause:  A remote mapping of the statement is required but cannot be
                achieved because GLOBAL_NAMES should be set to TRUE for it to
                be achieved.
        Action: Issue ALTER SESSION SET GLOBAL_NAMES = TRUE (if possible)

    What is the alternative action when setting global_names=true is not possible? Cheers, Jean

    Read the article

  • Pseudorandom Number Generation with Specific Non-Uniform Distributions

    - by carnun
    Hello all, I'm writing a program that simulates various random walks (with differing distributions). At each timestep I need randomly generated two-dimensional step distances and angles drawn from the random walk's distribution. I'm hoping someone can check my understanding of how to generate these random numbers. As I understand it, I can use inverse transform sampling as follows: if f(x) is the pdf of our random walk, which has a non-uniform distribution, and y is a random number from a uniform distribution, then if we let f(x) = y and solve to find x, we have a random number from the non-uniform distribution. Is this a feasible solution?
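
    One detail worth flagging: inverse transform sampling inverts the CDF F(x) = P(X <= x), not the pdf f(x); you set F(x) = y and solve for x = F^(-1)(y). A sketch for an exponential distribution, where the inverse has a closed form:

        import java.util.Random;

        // Sketch: inverse transform sampling for Exp(lambda).
        // CDF: F(x) = 1 - e^(-lambda*x), so F^(-1)(y) = -ln(1 - y) / lambda.
        class InverseTransform {
            static double sampleExponential(double lambda, Random rng) {
                double y = rng.nextDouble();          // uniform on [0, 1)
                return -Math.log(1.0 - y) / lambda;   // solve F(x) = y for x
            }
        }

    When F^(-1) has no closed form, you invert numerically or fall back to something like rejection sampling.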

    Read the article

  • DBAs say no to SQL Server DTC?

    - by NabilS
    I am trying to get our DBAs to enable DTC on a SQL Server 2005 cluster. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (which could take months!), as it is not just a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this? Thanks

    Read the article

  • Using Git or Mercurial, how would you know when you do a clone or a pull, no one is checking in files?

    - by Jian Lin
    Using Git or Mercurial, how would you know that, when you do a clone or a pull, no one is checking files in (pushing)? This can be important because: 1) you might never know the code base is in an inconsistent state, and then spend 2 hours trying to debug it for what's wrong; 2) with all the framework code - potentially hundreds of files - if some files are inconsistent with the others, can't rake db:migrate or script/generate controller cause some damage or inconsistencies in the code base?

    Read the article

  • How to provide an API client with 1,000,000 database results?

    - by Chris Dutrow
    What is a good way to provide an API client with 1,000,000 database results? We are currently using PostgreSQL. A few suggested methods:

        Paging using cursors
        Paging using random numbers (add "GREATER THAN ORDER BY" to each query)
        Save the information to a file and let the client download it
        Iterate through the results, then POST the data to the client's server
        Return only keys to the client, then let the client request the objects from cloud files like Amazon S3 (this may still require paging just to get the file names)

    What haven't I thought of that is stupidly simple and way better than any of these options?
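
    If the "GREATER THAN ORDER BY" option means keyset pagination (my reading of it), this is roughly what each page request looks like; the table and column names are made up:

        import java.sql.*;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch: keyset ("seek") pagination - each request returns the page
        // after the last id the client saw, avoiding OFFSET's linear scan cost.
        class KeysetPager {
            // Assumes a 'results' table with a unique, indexed 'id' column.
            static List<String> nextPage(Connection conn, long lastSeenId,
                                         int pageSize) throws SQLException {
                String sql = "SELECT id, payload FROM results WHERE id > ? "
                           + "ORDER BY id LIMIT ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setLong(1, lastSeenId);
                    ps.setInt(2, pageSize);
                    List<String> page = new ArrayList<>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            page.add(rs.getString("payload"));
                        }
                    }
                    return page;
                }
            }
        }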

    Read the article

  • Mesos slave not 'Running' multiple executors simultaneously

    - by user3084164
    I am using Mesos to distribute a bunch of tasks to different machines (mesos-slaves). Here is what happens:

        1. My scheduler gets resource offers and accepts them.
        2. Mesos stages multiple executors on the same mesos-slaves (each slave has 4 CPUs).
        3. Only ONE executor enters the 'Running' state on each slave, while the others are shown in the 'Staging' state.
        4. Only after the current executor finishes execution does the next one start running.

    Given that I have 4 CPUs on each machine, shouldn't each slave be running 4 executors simultaneously? Each executor requires 1 CPU.

    Read the article
