Search Results

Search found 908 results on 37 pages for 'optimistic concurrency'.

Page 2 of 37

  • java concurrency assignments

    - by dev
    I am a JEE developer, and I want to build up my skills in concurrent development. Could you provide me with some assignments, ideas, or other exercises - just for learning and practising concurrency programming? Thanks!

    Read the article
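
    A classic starter assignment for this kind of practice is a bounded producer/consumer. The sketch below is my own illustration, not from the question; it uses java.util.concurrent, and a good follow-up exercise is to reimplement the queue yourself with wait/notify and then with Lock/Condition and compare the three.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Producer/consumer over a bounded queue: the producer blocks when the queue
    // is full, the consumer blocks when it is empty, and a poison pill ends the run.
    public class ProducerConsumerExercise {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 100; i++) {
                        queue.put(i);      // blocks while the queue is full
                    }
                    queue.put(-1);         // poison pill tells the consumer to stop
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    int value;
                    while ((value = queue.take()) != -1) {   // blocks while empty
                        System.out.println("consumed " + value);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }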

  • Postfix, limit destination concurrency

    - by Hamlet
    Hi, I have a website sending email using PHPMailer - CentOS, Postfix, PHP, MySQL etc. Emails are getting delivered to all hosts correctly except one: Mar 30 14:38:22 server postfix/qmgr[15467]: 7237D218852D: to=, relay=none, delay=0.04, delays=0.04/0.01/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mail06.indamail.hu[91.83.45.46] while sending MAIL FROM) They told me to limit concurrent connections to their server to 2. I did, but they still say I connect more than twice. main.cf: default_destination_concurrency_limit = 1 fallback_relay = smtp_destination_concurrency_limit = 1 initial_destination_concurrency = 1 Any ideas? Thanks, Hamlet

    Read the article
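
    A common way to enforce a hard cap for just that destination (a sketch only - the transport name "indamail" and the file paths are my own illustrative choices, not from the question) is to route the domain over a dedicated transport and give that transport both a concurrency limit and a process limit:

    # main.cf
    transport_maps = hash:/etc/postfix/transport
    indamail_destination_concurrency_limit = 1

    # /etc/postfix/transport  (run "postmap /etc/postfix/transport" after editing)
    indamail.hu    indamail:

    # master.cf - the maxproc column ("2") caps this transport at two delivery processes
    indamail  unix  -       -       n       -       2       smtp

    Note that default_destination_concurrency_limit only applies to transports that do not set their own limit, so a per-transport setting like the one above takes precedence.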

  • postfix concurrency limit with round robin dns

    - by goose
    Take the following internal round-robin DNS setup: mymta.com. IN A 172.31.1.1 mymta.com. IN A 172.31.1.2 mymta.com. IN A 172.31.1.3 mymta.com. IN A 172.31.1.4 mymta.com. IN A 172.31.1.5 mymta.com. IN A 172.31.1.6 mymta.com. IN A 172.31.1.7 mymta.com. IN A 172.31.1.8 mymta.com. IN A 172.31.1.9 mymta.com. IN A 172.31.1.10 Now assume the following Postfix setup (assume these are the only tweaks from the defaults in the Debian package) main.cf: smtp_connection_cache_destinations = mymta.com smtp_connection_cache_reuse_limit = 750 smtp_destination_concurrency_limit = 75 transport * :[mymta.com] I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However I'm seeing more than a few hundred connections to mymta.com and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?

    Read the article
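
    Whatever the per-destination accounting does across those A records, one setting that reliably caps the total number of simultaneous outbound SMTP sessions is the process limit (maxproc column) of the smtp service in master.cf - each smtp(8) delivery agent holds at most one connection at a time, so this is a hard ceiling. A sketch (the value 75 is just the figure from the question):

    # master.cf - at most 75 smtp(8) delivery agents, hence at most 75 concurrent outbound sessions
    smtp      unix  -       -       n       -       75      smtp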

  • ASP.NET MVC Concurrency with RowVersion in Edit Action

    - by Jorin
    I'm wanting to do a simple edit form for our Issue Tracking app. For simplicity, the HttpGet Edit action looks something like this: // Issues/Edit/12 public ActionResult Edit(int id) { var thisIssue = edmx.Issues.First(i => i.IssueID == id); return View(thisIssue); } and then the HttpPost action looks something like this: [HttpPost] public ActionResult Edit(int id, FormCollection form) { // this is the dumb part where I grab the object before I update it. // concurrency is sidestepped here. var thisIssue = edmx.Issues.Single(c => c.IssueID == id); TryUpdateModel(thisIssue); if (ModelState.IsValid) { edmx.SaveChanges(); TempData["message"] = string.Format("Issue #{0} successfully modified.", id); return RedirectToAction("Index"); } return View(thisIssue); } Which works wonderfully. However, the concurrency check doesn't work because in the Post action I'm re-retrieving the current entity right before I attempt to update it. However, with EF, I don't know how to use the fanciness of SaveChanges() but attach my thisIssue to the context. I tried to call edmx.Issues.Attach(thisIssue) but I get "The object cannot be attached because it is already in the object context. An object can only be reattached when it is in an unchanged state." How do I handle concurrency in MVC with EF, and/or how do I properly Attach my edited object to the context? Thanks in advance

    Read the article

  • How much configurability to give to users regarding concurrency?

    - by rwong
    This question is a narrowing-down of these related questions: How much effort should we spend to programming for multiple cores? Concurrency: How do you approach the design and debug the implementation? Given that each user's computers may have different performance characteristics with respect to calculations, memory, disk I/O bandwidth and network I/O bandwidth, and that it is difficult to implement an automated self-tuning system in your software, how much configurability should we give to the end-users so that they can find ways (by trial-and-error?) to improve our software's efficiency? If we give users the ability to change these settings, how do we give visual feedback to users so they can measure the performance changes?

    Read the article
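
    One common compromise (my own sketch, not from the question - the property name app.workerThreads is purely illustrative) is to auto-detect a sensible default, expose a single override rather than many knobs, and echo the effective value back so users get at least basic feedback on what their setting did:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Detect a default from the hardware, allow one user override via a system
    // property, clamp nonsense values, and report the value actually in use.
    public class WorkerPoolConfig {
        public static ExecutorService createPool() {
            int defaultThreads = Runtime.getRuntime().availableProcessors();
            int threads = Integer.getInteger("app.workerThreads", defaultThreads);
            threads = Math.max(1, threads);
            System.out.println("Using " + threads + " worker threads"
                    + " (override with -Dapp.workerThreads=N)");
            return Executors.newFixedThreadPool(threads);
        }
    }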

  • Simple non-network concurrency with Twisted

    - by Rince
    Dear Pythoners, I have a problem with using Twisted for simple concurrency in Python. The problem is, I don't know how to do it, and all online resources are about Twisted's networking abilities. So I am turning to SO gurus for some guidance. Python 2.5 is used. A simplified version of my problem runs as follows: (1) a bunch of scientific data; (2) a function that munches on the data and creates output; (3) ??? <- here enters concurrency: it takes chunks of data from 1 and feeds them to 2; (4) output from 3 is joined and stored. My guess is that the Twisted reactor can do the number three job. But how? Thanks a lot for any help and suggestions.

    Read the article

  • C#, Manage concurrency in database access

    - by Goul
    Hi there, a while ago I wrote an application used by multiple users to handle trade creation. I haven't done development for some time now and can't remember how I managed the concurrency between the users, so I would like your advice in terms of design. The application was as follows: - One heavy client per user - A single database - Access to the database for each user to insert/update/delete trades - A grid in the application reflecting the trades table, updated each time someone changes a deal. My questions: 1- Do you confirm I shouldn't worry about the database connection for each client? Considering that there is a singleton in each, I would expect one connection per client with no issue. 2- How do I prevent conflicting concurrent accesses? I guess I should lock when modifying the data, but I don't remember how. 3- How do I have the grid automatically updated whenever the database is (by another user, for example)? Thank you in advance for your help!

    Read the article

  • C# vs Java Concurrency

    - by Lirik
    What are some notable differences between C# and Java concurrency? Are there any fundamental differences that we should know about? What should developers consider when trying to pick one or the other? I based my question on this one, but I'm more interested in the fundamental differences... not which one is better.

    Read the article

  • Global Temporary Table Concurrency

    - by sahs
    Hi, I have a global temp table which is set to delete on commit. How does it behave with respect to concurrency? I mean, what happens if another session wants to use that global temporary table? The answer will probably not be "they share the same data". Now, if my guess is correct :), is the table locked until the first connection commits, or does the DBMS create a global temp table for each connection (something like an instance of the table)?

    Read the article

  • .Net concurrency performance on client side

    - by Yaron Naveh
    I am writing a client-side .NET application which is expected to use a lot of threads. I was warned that .NET performance is very bad when it comes to concurrency. While I am not writing a real-time application, I want to make sure my application is scalable (i.e. allows many threads) and somewhat comparable to an equivalent C++ application. Can anyone share their experience? Can anyone refer me to a relevant benchmark?

    Read the article

  • java concurrency: many writers, one reader

    - by Janning
    I need to gather some statistics in my software and I am trying to make it fast and correct, which is not easy (for me!). First, my code so far, with two classes, a StatsService and a StatsHarvester: public class StatsService { private Map<String, Long> stats = new HashMap<String, Long>(1000); public void notify ( String key ) { Long value = 1l; synchronized (stats) { if (stats.containsKey(key)) { value = stats.get(key) + 1; } stats.put(key, value); } } public Map<String, Long> getStats ( ) { Map<String, Long> copy; synchronized (stats) { copy = new HashMap<String, Long>(stats); stats.clear(); } return copy; } } This is my second class, a harvester which collects the stats from time to time and writes them to a database: public class StatsHarvester implements Runnable { private StatsService statsService; private Thread t; public void init ( ) { t = new Thread(this); t.start(); } public synchronized void run ( ) { while (true) { try { wait(5 * 60 * 1000); // 5 minutes collectAndSave(); } catch (InterruptedException e) { e.printStackTrace(); } } } private void collectAndSave ( ) { Map<String, Long> stats = statsService.getStats(); // do something like: // saveRecords(stats); } } At runtime it will have about 30 concurrently running threads, each calling notify(key) about 100 times. Only one StatsHarvester is calling statsService.getStats(). So I have many writers and only one reader. It would be nice to have accurate stats, but I don't care if some records are lost under high concurrency. The reader should run every 5 minutes or whatever is reasonable. Writing should be as fast as possible. Reading should be fast, but if it locks for about 300ms every 5 minutes, it's fine. I've read many docs (Java Concurrency in Practice, Effective Java and so on), but I have the strong feeling that I need your advice to get it right. I hope I stated my problem clearly and briefly enough to get valuable help.

    Read the article
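
    For the many-writers/one-reader shape described above, a common lock-free alternative (a sketch under my own naming, not the poster's code) is a ConcurrentHashMap of LongAdder counters (Java 8+), with the single harvester swapping in a fresh map to take its snapshot; a handful of increments racing with the swap can be lost, which the question explicitly tolerates:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.LongAdder;

    // Many writers increment without locking; one periodic reader swaps the whole
    // map and drains the old one into a plain snapshot.
    public class LockFreeStatsService {
        private volatile ConcurrentHashMap<String, LongAdder> stats = new ConcurrentHashMap<>();

        public void notifyKey(String key) {
            stats.computeIfAbsent(key, k -> new LongAdder()).increment();
        }

        // Called only by the single harvester thread.
        public Map<String, Long> harvest() {
            ConcurrentHashMap<String, LongAdder> old = stats;
            stats = new ConcurrentHashMap<>();          // writers now land in the new map
            Map<String, Long> snapshot = new HashMap<>();
            old.forEach((k, v) -> snapshot.put(k, v.sum()));
            return snapshot;
        }

        public static void main(String[] args) {
            LockFreeStatsService service = new LockFreeStatsService();
            ScheduledExecutorService harvester = Executors.newSingleThreadScheduledExecutor();
            harvester.scheduleAtFixedRate(
                    () -> System.out.println(service.harvest()), 5, 5, TimeUnit.MINUTES);
            // ... ~30 writer threads would call service.notifyKey("someKey") here ...
        }
    }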

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding, make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The goal of Auto DOP: The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point we would see a tailing off of the performance curve and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement.

    The goal of Queuing: On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we (a) don't throttle down a DOP because other statements are running on the system, and (b) stay within the physical limits of a system's processing power. Instead of making statements go at a lower DOP we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory – and hopefully – practice is that by giving a statement the optimal DOP the sum of all statements runs faster with queuing than without queuing.

    Increasing the Number of Potential Parallel Statements: To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here…, but do realize that anything serial (i.e. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system where you have two groups of queries, serial short running and potentially parallel long running ones, you may want to worry only about the long running ones with this parallel statement threshold. As an example, let's assume the short running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short running queries all run serial as they do today (if it ain't broken, don't fix it) and the long running ones can be evaluated for (higher degrees of) parallelism. This makes sense because the longer running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running these in parallel are much more significant (again, that is mostly the case).

    Setting a Maximum DOP for a Statement: Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree any given statement will be evaluated at. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT. This parameter controls the degree on the entire cluster and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5 minute queries from the previous paragraph, the limit of 32 means that none of the statements that are evaluated for Auto DOP ever runs at more than a DOP of 32.

    Concurrently Running a High DOP: A basic assumption about running high DOP statements at high concurrency is that you will at some point in time (and this is true on any parallel processing platform!) run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest-efficiency DOP. The PARALLEL_SERVER_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVER_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP; its default equates to 4 * Default DOP (2 processes for a DOP, and a default value of 2 * Default DOP, hence a default of 4 * Default DOP processes). Let's say we have PARALLEL_SERVER_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency, PARALLEL_SERVER_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVER_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.

    Read the article

  • Java Concurrency: CAS vs Locking

    - by Hugo Walker
    I'm currently reading the book Java Concurrency in Practice. Chapter 15 discusses nonblocking algorithms and the compare-and-swap (CAS) method, and it is written that CAS performs much better than locking methods. I want to ask people who have already worked with both of these concepts: when do you prefer which one? Is it really so much faster? Personally, for me the usage of locks is much clearer and easier to understand, and maybe even better to maintain (please correct me if I am wrong). Should we really focus on building our concurrent code around CAS rather than locks to get a better performance boost, or is maintainability more important? I know there is maybe no strict rule for when to use what, but I would just like to hear some opinions and experiences with the new concept of CAS.

    Read the article
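
    A minimal illustration of the two approaches (my own sketch, not from the book): the same counter written with intrinsic locking and with a CAS retry loop. A failed CAS simply retries instead of parking the thread, which is why it often wins under low to moderate contention; under very heavy contention the retry loop can burn CPU, which is where classes like LongAdder come in.

    import java.util.concurrent.atomic.AtomicLong;

    // Two thread-safe counters: one built on intrinsic locking, one on a CAS retry loop.
    public class Counters {

        static class LockingCounter {
            private long value;
            public synchronized long increment() {
                return ++value;                 // lock acquired and released on every call
            }
        }

        static class CasCounter {
            private final AtomicLong value = new AtomicLong();
            public long increment() {
                while (true) {
                    long current = value.get();
                    long next = current + 1;
                    if (value.compareAndSet(current, next)) {   // retry on contention
                        return next;
                    }
                }
            }
            public long get() {
                return value.get();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            CasCounter counter = new CasCounter();
            Runnable task = () -> { for (int i = 0; i < 1_000_000; i++) counter.increment(); };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(counter.get());  // 2000000
        }
    }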

  • concurrency in Java

    - by p1
    1] What is non-blocking concurrency and how is it different? 2] I have heard that this is available in Java. Are there any particular scenarios in which we should use this feature? 3] Is there a difference/advantage to using one of these approaches for a collection, and what are the trade-offs? class List { private final ArrayList<String> list = new ArrayList<String>(); void add(String newValue) { synchronized (list) { list.add(newValue); } } } OR private final List<String> list = Collections.synchronizedList(new ArrayList<String>()); The questions are more from a learning/understanding point of view. Thanks for your attention.

    Read the article
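
    To make option 3 concrete, a small sketch (my own, not the poster's code) of three thread-safe ways to collect strings - the last one is the non-blocking, CAS-based kind the first question asks about:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Three thread-safe ways to collect strings:
    //  - a synchronized wrapper (every call takes the same lock),
    //  - CopyOnWriteArrayList (lock-free reads, copies the array on each write),
    //  - ConcurrentLinkedQueue (non-blocking, CAS-based, no locks at all).
    public class CollectionChoices {
        private final List<String> syncList =
                Collections.synchronizedList(new ArrayList<String>());
        private final List<String> cowList = new CopyOnWriteArrayList<>();
        private final Queue<String> casQueue = new ConcurrentLinkedQueue<>();

        void add(String newValue) {
            syncList.add(newValue);   // blocks other threads while the lock is held
            cowList.add(newValue);    // cheap reads, expensive writes
            casQueue.add(newValue);   // never blocks; retries internally on contention
        }
    }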

  • How to handle data concurrency in ASP.NET?

    - by Itsgkiran
    Hi all! I have an application that is accessed by a number of users at the same time, and those users are getting the same ID. Here is what I am doing in the code: when they create a new user, I get the max ID from the DB and increase the value by 1. Because several users do this at the same time, they end up with the same ID, so I am facing a concurrency problem. How do I solve it? I need to display a different number for each user who clicks on NewUser. I am using SQL Server 2008, .NET 3.5 and C#. Thanks in advance.

    Read the article

  • How to handle concurrency control in dynamic data?

    - by Andrew
    I've been quite impressed with Dynamic Data and how easy and quick it is to get a simple site up and running. I'm planning on using it for a simple internal HR admin site for registering people's skills/degrees/etc. I've been watching the intro videos at www.asp.net/dynamicdata and one thing they never mention is how to handle concurrency control. It seems that DD does not handle it right out of the box (unless there is some setting I haven't seen), as I manually generated a change conflict exception and the app failed without any user-friendly message. Anybody know if DD handles it out of the box? Or do you have to somehow build it into the site?

    Read the article

  • Understanding Clojure concurrency example

    - by dusha
    Hello, I just went through various documentation on Clojure concurrency and came across the example on the website (http://clojure.org/concurrent_programming). (import '(java.util.concurrent Executors)) (defn test-stm [nitems nthreads niters] (let [refs (map ref (replicate nitems 0)) pool (Executors/newFixedThreadPool nthreads) tasks (map (fn [t] (fn [] (dotimes [n niters] (dosync (doseq [r refs] (alter r + 1 t)))))) (range nthreads))] (doseq [future (.invokeAll pool tasks)] (.get future)) (.shutdown pool) (map deref refs))) I understand what it does and how it works, but I don't get why the second anonymous function fn [] is needed. Many thanks, dusha. P.S. Without this second fn [] I get a NullPointerException.

    Read the article

  • Does concurrency inherently introduce "randomness" into a game?

    - by Jeff
    When a game is implemented with concurrency (as most games are), does this necessarily, by its very nature, introduce an element of randomness into the game that is outside of the players' control? Note that when I use the word "random", I'm not meaning to launch into a philosophical debate about the deterministic nature of the system. I understand that concurrency is deterministic in the sense that the operating system decides which processes to allow time on the CPU and in what order (or the JVM controls which Thread's turn it is to execute, etc). But my understanding of this is that there is no way to control or predict whether one thread's next command will execute before or after another. The reason I'm asking is because this seems like a fundamental difficulty for game development where a game is supposedly designed around a player's skill. Consider a game like League of Legends. Assume that two players are battling it out. It's a very close contest between the two and it's coming down to the wire -- so much so that whoever gets their last attack off will be the one to kill the other and win the game for their team. If the players are implemented using concurrency and the situation really was like this, is it essentially out of the players' hands at this point? Is the outcome of this match all up to whatever system is arbitrarily deciding which player's thread/process will execute next? If not, what am I misunderstanding about concurrency? If so, is there any way around this problem so that a game of skill can always be a game of skill, especially in those most crucial moments?

    Read the article
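
    One common way around the worry in the last paragraph (a sketch of the general pattern, not tied to any particular engine) is to keep the simulation single-threaded: input/network threads only timestamp and enqueue commands, and one simulation thread drains the queue each tick and resolves ties by timestamp or an explicit rule, so the OS scheduler never decides who hit first.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Input threads enqueue timestamped commands; a single simulation thread
    // applies them in timestamp order, making the outcome independent of which
    // thread happened to be scheduled first.
    public class GameLoop {
        static final class Command {
            final long playerId;
            final long timestampNanos;
            final String action;
            Command(long playerId, long timestampNanos, String action) {
                this.playerId = playerId;
                this.timestampNanos = timestampNanos;
                this.action = action;
            }
        }

        private final ConcurrentLinkedQueue<Command> inbox = new ConcurrentLinkedQueue<>();

        // Called from any number of input/network threads.
        public void submit(Command c) {
            inbox.add(c);
        }

        // Called only from the single simulation thread, once per tick.
        public void tick() {
            List<Command> batch = new ArrayList<>();
            for (Command c; (c = inbox.poll()) != null; ) {
                batch.add(c);
            }
            batch.sort(Comparator.comparingLong(c -> c.timestampNanos));
            for (Command c : batch) {
                apply(c);   // deterministic given the same ordered batch
            }
        }

        private void apply(Command c) {
            System.out.println("player " + c.playerId + " -> " + c.action);
        }
    }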

  • Java Concurrency : Synchronized(this) => and this.wait() and this.notify()

    - by jens
    Hello experts, I would appreciate your help in understanding a "Concurrency Example" from: http://forums.sun.com/thread.jspa?threadID=735386 Quote Start: public synchronized void enqueue(T obj) { // do addition to internal list and then... this.notify(); } public synchronized T dequeue() { while (this.size()==0) { this.wait(); } return // something from the queue } Quote End: My question is: why is this code valid? When I synchronize a method like "public synchronized", I synchronize on the instance of the object == this. However, in the example above: calling "dequeue" I will get the lock/monitor on this, and now I am in the dequeue method. As the list size is zero, the calling thread will wait. From my understanding I now have a deadlock situation, as I will have no chance of ever enqueuing an object (from another thread), because the "dequeue" method is not yet finished and the dequeue method holds the lock on this. So I will never get the possibility to call "enqueue", as I will not get the "this" lock. Background: I have exactly the same problem: I have some kind of connection pool (a List of Connections) and need to block if all connections are checked out. What is the correct way to synchronize the List to block if the size exceeds a limit or is zero? Thank you very much, Jens

    Read the article
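
    The key point is that wait() releases the monitor it was called on while the thread sleeps, so another thread can still enter a synchronized method on the same object and call notify(); there is no deadlock. A minimal sketch of the connection-pool case described above (my own illustration, with hypothetical names):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // A minimal blocking pool: wait() RELEASES the lock on "this" while the thread
    // sleeps, which is why another thread can still enter the synchronized release()
    // method and notify the waiter.
    public class BlockingPool<T> {
        private final Deque<T> free = new ArrayDeque<>();

        public synchronized T acquire() throws InterruptedException {
            while (free.isEmpty()) {
                wait();               // gives up the monitor until notified
            }
            return free.pop();
        }

        public synchronized void release(T connection) {
            free.push(connection);
            notifyAll();              // wake up any thread blocked in acquire()
        }
    }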

  • Concurrency and Coordination Runtime (CCR) Learning Resources

    - by Harry
    I have recently been learning the ins and outs of the Concurrency and Coordination Runtime (CCR). Finding good learning resources for this relatively new technology has been quite difficult. (A quick Google search brings up "Creedence Clearwater Revival" as the top result!) Some of the resources I have found: Free e-book chapter from WROX on the Robotics Developer Studio; Good article/post on InfoQ; Robotics member blog; Very active MSDN CCR Forum - got plenty of help from here!; Great MSDN Magazine article by Jeffrey Richter; Official CCR User Guide - didn't find this very helpful; Great blogging series on CCR; iodyner CCR-related blog - Update: moved to here; Eight or so videos on Channel9.msdn.com; CCR Patterns page on MS Robotics Studio - I haven't read this yet; 4 x CCR questions on Stack Overflow - most of the questions have been mine! LOL; CCR and DSS toolkit has now been released to MSDN members. Do you have any good learning resources for the CCR? I really hope that Microsoft will publish more material; so far it has been too Robotics-specific. I believe that MS needs to acknowledge that most people are using the CCR in isolation from the DSS and Robotics Studio. Update: The Mix 2010 conference had a presentation by MySpace about how they have used the CCR framework in their middle tier. They also open-sourced the code base: MySpace DataRelay, Mix video presentation.

    Read the article

  • Java Concurrency in practice sample question

    - by andy boot
    I am reading "Java Concurrency in Practice" and looking at the example code on page 51. According to the book, this piece of code is at risk of failure if it has not been published properly. Because I like to code up examples and break them to prove how they work, I have tried to make it throw an AssertionError but have failed (leading me to my previous question). Can anyone post sample code so that an AssertionError is thrown? Rule: do not modify the Holder class. public class Holder{ private int n; public Holder(int n){ this.n = n; } public void assertSanity(){ if (n != n) { throw new AssertionError("This statement is false"); } } } I have modified the class to make it more fragile, but I still cannot get an AssertionError thrown. class Holder2{ private int n; private int n2; public Holder2(int n) throws InterruptedException{ this.n = n; Thread.sleep(200); this.n2 = n; } public void assertSanity(){ if (n != n2) { throw new AssertionError("This statement is false"); } } } Is it possible to make either of the above classes throw an AssertionError? Or do we have to accept that they may occasionally do so and we can't write code to prove it?

    Read the article
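
    A best-effort harness for provoking the failure, using the Holder class above unchanged (my own sketch): one thread keeps publishing new instances through a plain, non-volatile static field while several reader threads call assertSanity(). The Java memory model permits the AssertionError here, but on x86 hardware with common JVMs you may never actually observe it - the absence of a failure does not prove the publication is safe.

    // Assumes the Holder class from the question is on the classpath.
    // Runs until killed; any AssertionError that does occur is printed by the
    // default uncaught-exception handler of the reader thread that hit it.
    public class UnsafePublicationHarness {
        static Holder holder;                  // intentionally NOT volatile

        public static void main(String[] args) {
            for (int i = 0; i < 4; i++) {
                new Thread(() -> {
                    while (true) {
                        Holder h = holder;     // may observe a partially constructed object
                        if (h != null) {
                            h.assertSanity();
                        }
                    }
                }).start();
            }
            int n = 0;
            while (true) {
                holder = new Holder(n++);      // unsafe publication: no synchronization, no volatile
            }
        }
    }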

  • Java Concurrency in practice sample question

    - by andy boot
    I am reading "Java Concurrency in Practice" and looking at the example code on page 51. This states that if a thread has a reference to a shared object, then other threads may be able to access that object before the constructor has finished executing. I have tried to put this into practice, so I wrote this code thinking that if I ran it enough times a RuntimeException("World is f*cked") would occur. But it isn't happening. Is this a case of the Java spec not guaranteeing something but my particular implementation of Java guaranteeing it for me? (Java version: 1.5.0 on Ubuntu.) Or have I misread something in the book? Code (I expect an exception but it is never thrown): public class Threads { private Widgit w; public static void main(String[] s) throws Exception { while(true){ Threads t = new Threads(); t.runThreads(); } } private void runThreads() throws Exception{ new Checker().start(); w = new Widgit((int)(Math.random() * 100) + 1); } private class Checker extends Thread{ private static final int LOOP_TIMES = 1000; public void run() { int count = 0; for(int i = 0; i < LOOP_TIMES; i++){ try { w.checkMe(); count++; } catch(NullPointerException npe){ //ignore } } System.out.println("checked: "+count+" times out of "+LOOP_TIMES); } } private static class Widgit{ private int n; private int n2; Widgit(int n) throws InterruptedException{ this.n = n; Thread.sleep(2); this.n2 = n; } void checkMe(){ if (n != n2) { throw new RuntimeException("World is f*cked"); } } } }

    Read the article
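
    For contrast, a short sketch of the standard fixes (my own illustration, not from the book excerpt): safely publishing the object makes the question moot regardless of whether the broken version happens to fail on a given JVM.

    import java.util.concurrent.atomic.AtomicReference;

    // Two standard ways to publish an object safely to other threads:
    //  1) a volatile field: the write happens-before any read that sees it;
    //  2) an AtomicReference, which gives the same visibility guarantee plus CAS.
    // A reader that sees the reference then also sees a fully constructed object.
    public class SafePublication {
        private volatile Widgit widgit;                                       // fix 1
        private final AtomicReference<Widgit> ref = new AtomicReference<>();  // fix 2

        void publish(Widgit w) {
            widgit = w;
            ref.set(w);
        }

        Widgit latest() {
            return widgit;
        }

        static class Widgit {
            private final int n;   // final fields are also published safely once the constructor completes
            Widgit(int n) { this.n = n; }
            int value() { return n; }
        }
    }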
