Search Results

Search found 908 results on 37 pages for 'optimistic concurrency'.

Page 3/37 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Concurrency problem with arrays (Java)

    - by Johannes
    For an algorithm I'm working on I tried to develop a blacklisting mechanism that can blacklist arrays in a specific way: if "1, 2, 3" is blacklisted, "1, 2, 3, 4, 5" is also considered blacklisted. I'm quite happy with the solution I've come up with so far, but there seem to be some serious problems when I access a blacklist from multiple threads. The method "contains" (see code below) sometimes returns true even if an array is not blacklisted. This problem does not occur if I only use one thread, so it most likely is a concurrency problem. I've tried adding some synchronization, but it didn't change anything. I also tried some slightly different implementations using java.util.concurrent classes. Any ideas on how to fix this?

        public class Blacklist {
            private static final int ARRAY_GROWTH = 10;
            private final Node root = new Node();

            private static class Node {
                private volatile Node[] childNodes = new Node[ARRAY_GROWTH];
                private volatile boolean blacklisted = false;

                public void blacklist() {
                    this.blacklisted = true;
                    this.childNodes = null;
                }
            }

            public void add(final int[] array) {
                synchronized (root) {
                    Node currentNode = this.root;
                    for (final int edge : array) {
                        if (currentNode.blacklisted) return;
                        else if (currentNode.childNodes.length <= edge) {
                            currentNode.childNodes = Arrays.copyOf(currentNode.childNodes, edge + ARRAY_GROWTH);
                        }
                        if (currentNode.childNodes[edge] == null) {
                            currentNode.childNodes[edge] = new Node();
                        }
                        currentNode = currentNode.childNodes[edge];
                    }
                    currentNode.blacklist();
                }
            }

            public boolean contains(final int[] array) {
                synchronized (root) {
                    Node currentNode = this.root;
                    for (final int edge : array) {
                        if (currentNode.blacklisted) return true;
                        else if (currentNode.childNodes.length <= edge || currentNode.childNodes[edge] == null) return false;
                        currentNode = currentNode.childNodes[edge];
                    }
                    return currentNode.blacklisted;
                }
            }
        }
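
    One possible alternative (an editor's sketch, not the asker's code): replace the array-backed child table with a ConcurrentHashMap-based trie. computeIfAbsent (Java 8) installs each child node atomically, so neither add nor contains needs a global lock, and a volatile blacklisted flag still gives the prefix short-circuit.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class ConcurrentBlacklist {
            private static final class Node {
                final ConcurrentMap<Integer, Node> children = new ConcurrentHashMap<Integer, Node>();
                volatile boolean blacklisted;
            }

            private final Node root = new Node();

            public void add(int[] array) {
                Node current = root;
                for (int edge : array) {
                    if (current.blacklisted) return; // a prefix is already blacklisted
                    current = current.children.computeIfAbsent(edge, k -> new Node());
                }
                current.blacklisted = true;
            }

            public boolean contains(int[] array) {
                Node current = root;
                for (int edge : array) {
                    if (current.blacklisted) return true; // a blacklisted prefix covers this array
                    current = current.children.get(edge);
                    if (current == null) return false;
                }
                return current.blacklisted;
            }
        }

    A concurrent add may briefly race a contains, but the answer is always either the pre-add or post-add state, never a false positive for an unrelated array.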

    Read the article

  • What are the best resources for learning about concurrency and multi-threaded applications?

    - by Zepee
    I realised I have a massive knowledge gap when it comes to multi-threaded applications and concurrent programming. I've covered some basics in the past, but most of it seems to be gone from my mind, and it is definitely a field that I want, and need, to be more knowledgeable about. What are the best resources for learning about building concurrent applications? I'm a very practically oriented person, so the more concrete examples a book contains, the better, but I'm open to suggestions. I personally prefer to work in pseudocode or C++, and a slant toward game development would be best, but is not required.

    Read the article

  • Concurrency with listeners and upload: is this solution sound?

    - by cbrulak
    I'm working on an Android app. We have listeners for position data, and camera capture is timed on a background thread (not a service). The timer thread dictates when the image is captured and also when the data is cached in a local SQLite DB. So my question is how to properly store the location data as it comes in from the listeners, and pull that data so that the database can be updated as the camera capture is executed. I can't put the location data into the database as it arrives, because it is processed more frequently than the camera images, and the camera images are what dominates the architecture at the moment (location data is supplementary). However, I need the location data to get a rough location of where the image was captured. My first thought was to have a singleton store the location data, with a semaphore on the getInstance method so that we don't get an error while the database update is happening (after an image is captured). The location data can wait for the database update, or it can be lost for that particular event - it doesn't really matter. What are your thoughts? Am I on the right track? (And is this the right sub-site, or would this be better on Stack Overflow?)
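
    One lock-free way to get the "latest fix wins, losing one is fine" behaviour described above (an editor's sketch; all names are hypothetical): publish each new location into an AtomicReference and have the capture path read a snapshot, so neither side ever blocks the other.

        import java.util.concurrent.atomic.AtomicReference;

        public final class LastKnownLocation {
            // Minimal immutable stand-in for real location data.
            public static final class Fix {
                public final double latitude;
                public final double longitude;
                public final long timestampMillis;

                public Fix(double latitude, double longitude, long timestampMillis) {
                    this.latitude = latitude;
                    this.longitude = longitude;
                    this.timestampMillis = timestampMillis;
                }
            }

            private static final LastKnownLocation INSTANCE = new LastKnownLocation();
            private final AtomicReference<Fix> latest = new AtomicReference<Fix>();

            private LastKnownLocation() {}

            public static LastKnownLocation getInstance() { return INSTANCE; }

            // Called from the location listener whenever a new fix arrives.
            public void publish(Fix fix) { latest.set(fix); }

            // Called on the capture/database path; null until the first fix arrives.
            public Fix snapshot() { return latest.get(); }
        }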

    Read the article

  • Concurrency Utilities for Java EE 6: JSR 236 Rebooting

    - by arungupta
    JSR 166 added support for concurrency utilities in the Java platform. The goal of JSR 236, a.k.a. Concurrency Utilities for Java EE, was to extend that support to the Java EE platform by adding asynchronous abilities to different application components. The EG had, however, been stagnant since Dec 2003. It's coming back to life with co-spec lead Anthony Lai's message to the JSR 236 EG (archived here). The JSR will be operating under JCP 2.8's transparency rules and can be tracked at concurrency-spec.java.net. All the mailing lists are archived here. The final release is expected in Q1 2013, and the APIs will live in the javax.enterprise.concurrent package. Please submit your nomination if you would like to join this EG.
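
    For context, a sketch of how the planned API might look in use once the spec ships (an editor's assumption based on the draft material; the default JNDI resource name below is not guaranteed): a component looks up a container-managed executor and submits work to it, instead of spawning its own threads.

        import java.util.concurrent.Future;
        import javax.enterprise.concurrent.ManagedExecutorService;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class ReportTrigger {
            public Future<?> triggerAsyncReport() throws NamingException {
                // Hypothetical default resource name; the container manages the threads.
                ManagedExecutorService executor = (ManagedExecutorService)
                        new InitialContext().lookup("java:comp/DefaultManagedExecutorService");
                return executor.submit(new Runnable() {
                    public void run() {
                        System.out.println("generating report asynchronously...");
                    }
                });
            }
        }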

    Read the article

  • Self-updating collection concurrency issues

    - by DEHAAS
    I am trying to build a self-updating collection. Each item in the collection has a position (x, y). When the position is changed, an event is fired and the collection relocates the item. Internally the collection uses a “jagged dictionary”. The outer dictionary uses the x-coordinate as a key, while the nested dictionary uses the y-coordinate as a key. The nested dictionary then has a list of items as its value. The collection also maintains a dictionary that stores each item's position as recorded in the nested dictionaries – an item-to-stored-location lookup. I am having some trouble making the collection thread safe, which I really need. Source code for the collection:

        public class PositionCollection<TItem, TCoordinate> : ICollection<TItem>
            where TItem : IPositionable<TCoordinate>
            where TCoordinate : struct, IConvertible
        {
            private readonly object itemsLock = new object();
            private readonly Dictionary<TCoordinate, Dictionary<TCoordinate, List<TItem>>> items;
            private readonly Dictionary<TItem, Vector<TCoordinate>> storedPositionLookup;

            public PositionCollection()
            {
                this.items = new Dictionary<TCoordinate, Dictionary<TCoordinate, List<TItem>>>();
                this.storedPositionLookup = new Dictionary<TItem, Vector<TCoordinate>>();
            }

            public void Add(TItem item)
            {
                if (item.Position == null)
                {
                    throw new ArgumentException("Item must have a valid position.");
                }

                lock (this.itemsLock)
                {
                    if (!this.items.ContainsKey(item.Position.X))
                    {
                        this.items.Add(item.Position.X, new Dictionary<TCoordinate, List<TItem>>());
                    }

                    Dictionary<TCoordinate, List<TItem>> xRow = this.items[item.Position.X];
                    if (!xRow.ContainsKey(item.Position.Y))
                    {
                        xRow.Add(item.Position.Y, new List<TItem>());
                    }
                    xRow[item.Position.Y].Add(item);

                    if (this.storedPositionLookup.ContainsKey(item))
                    {
                        this.storedPositionLookup[item] = new Vector<TCoordinate>(item.Position);
                    }
                    else
                    {
                        // Store a copy of the original position
                        this.storedPositionLookup.Add(item, new Vector<TCoordinate>(item.Position));
                    }

                    item.Position.PropertyChanged += (object sender, PropertyChangedEventArgs eventArgs) =>
                        this.UpdatePosition(item, eventArgs.PropertyName);
                }
            }

            private void UpdatePosition(TItem item, string propertyName)
            {
                lock (this.itemsLock)
                {
                    Vector<TCoordinate> storedPosition = this.storedPositionLookup[item];
                    this.RemoveAt(storedPosition, item);
                    this.storedPositionLookup.Remove(item);
                }
            }
        }

    I have written a simple unit test to check for concurrency issues:

        [TestMethod]
        public void TestThreadedPositionChange()
        {
            PositionCollection<Crate, int> collection = new PositionCollection<Crate, int>();
            Crate crate = new Crate(new Vector<int>(5, 5));
            collection.Add(crate);

            Parallel.For(0, 100, new Action<int>((i) => crate.Position.X += 1));

            Crate same = collection[105, 5].First();
            Assert.AreEqual(crate, same);
        }

    The actual stored position varies every time I run the test. I appreciate any feedback you may have.

    Read the article

  • Techniques for modeling a dynamic dataflow with Java concurrency API

    - by Maian
    Is there an elegant way to model a dynamic dataflow in Java? By dataflow, I mean there are various types of tasks, and these tasks can be "connected" arbitrarily, such that when a task finishes, successor tasks are executed in parallel using the finished task's output as input, or when multiple tasks finish, their output is aggregated in a successor task (see flow-based programming). By dynamic, I mean that the type and number of successor tasks when a task finishes depend on the output of that finished task; for example, task A may spawn task B if it has a certain output, but may spawn task C if it has a different output. Another way of putting it is that each task (or set of tasks) is responsible for determining what the next tasks are.

    Sample dataflow for rendering a webpage. I have as task types: file downloader, HTML/CSS renderer, HTML parser/DOM builder, image renderer, JavaScript parser, JavaScript interpreter.

        File downloader task for HTML file
            HTML parser/DOM builder task
                File downloader task for each embedded file/link
                    If image, image renderer
                    If external JavaScript, JavaScript parser
                        JavaScript interpreter
                    Otherwise, just store in some var/field in HTML parser task
                JavaScript parser for each embedded script
                    JavaScript interpreter
                Wait for above tasks to finish, then HTML/CSS renderer

    (Obviously not optimal or perfectly correct, but this is simple.)

    I'm not saying the solution needs to be some comprehensive framework (in fact, the closer to the JDK API, the better), and I absolutely don't want something as heavyweight as, say, Spring Web Flow or some declarative markup or other DSL. To be more specific, I'm trying to think of a good way to model this in Java with Callables, Executors, ExecutorCompletionServices, and perhaps various synchronizer classes (like Semaphore or CountDownLatch). There are a couple of use cases and requirements:

    - Don't make any assumptions about which executor(s) the tasks will run on. In fact, to simplify, just assume there's only one executor. It can be a fixed thread pool executor, so a naive implementation can result in deadlocks (e.g. imagine a task that submits another task and then blocks until that subtask is finished, and now imagine several of these tasks using up all the threads).
    - To simplify, assume that the data is not streamed between tasks (task output = succeeding task's input) - the finishing task and succeeding task won't exist together, so the input data to the succeeding task will not be changed by the preceding task (since it's already done).
    - There are only a couple of operations that the dataflow "engine" should be able to handle:
        - A mechanism whereby a task can queue more tasks
        - A mechanism whereby a successor task is not queued until all the required input tasks are finished
        - A mechanism whereby the main thread (or other threads not managed by the executor) blocks until the flow is finished
        - A mechanism whereby the main thread (or other threads not managed by the executor) blocks until certain tasks have finished
    - Since the dataflow is dynamic (it depends on the input/state of each task), the activation of these mechanisms should occur within the task code, e.g. the code in a Callable is itself responsible for queueing more Callables. The dataflow "internals" should not be exposed to the tasks (Callables) themselves - only the operations listed above should be available to the task.
    - Note that the type of the data is not necessarily the same for all tasks, e.g. a file download task may accept a File as input but will output a String.
    - If a task throws an uncaught exception (indicating some fatal error requiring all dataflow processing to stop), it must propagate up to the thread that initiated the dataflow as quickly as possible and cancel all tasks (or something fancier, like a fatal error handler).
    - Tasks should be launched as soon as possible. This, along with the previous requirement, should preclude simple Future polling + Thread.sleep().
    - As a bonus, I would like the dataflow engine itself to perform some action (like logging) every time a task finishes, or when no task has finished in X time since the last task finished. Something like:

        ExecutorCompletionService<T> ecs;
        while (hasTasks()) {
            Future<T> future = ecs.poll(1 minute);
            some_action_like_logging();
            if (future != null) {
                future.get()
                ...
            }
            ...
        }

    Are there straightforward ways to do all this with the Java concurrency API? Or if it's going to be complex no matter what with what's available in the JDK, is there a lightweight library that satisfies the requirements? I already have a partial solution that fits my particular use case (it cheats in a way, since I'm using two executors, and just so you know, it's not related at all to the web browser example I gave above), but I'd like to see a more general-purpose and elegant solution.
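
    A possible building block for the "task queues more tasks" and "main thread blocks until the flow is finished" mechanisms (an editor's sketch, not from the post; error propagation, result passing, and cancellation are deliberately omitted): a Phaser (Java 7) counts in-flight tasks, tasks queue successors through a narrow interface, and the initiating thread blocks until the whole flow quiesces.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Phaser;

        final class DataflowEngine {
            private final ExecutorService pool;
            private final Phaser outstanding = new Phaser(1); // party 0 is the initiating thread

            DataflowEngine(ExecutorService pool) { this.pool = pool; }

            // The only operation exposed to tasks: queue more work.
            void submit(Runnable task) {
                outstanding.register(); // one party per in-flight task
                pool.execute(() -> {
                    try {
                        task.run(); // the task itself may call submit() again
                    } finally {
                        outstanding.arriveAndDeregister();
                    }
                });
            }

            // Called by the initiating thread: blocks until every task,
            // including successors queued later, has finished.
            void awaitCompletion() {
                outstanding.arriveAndAwaitAdvance();
            }

            public static void main(String[] args) {
                ExecutorService pool = Executors.newFixedThreadPool(2);
                DataflowEngine engine = new DataflowEngine(pool);
                engine.submit(() -> {
                    // dynamic flow: the task decides at runtime what to queue next
                    engine.submit(() -> System.out.println("successor task"));
                });
                engine.awaitCompletion();
                System.out.println("flow finished");
                pool.shutdown();
            }
        }

    Because tasks never block waiting for their subtasks - they finish and let successors carry on - a fixed pool cannot deadlock the way a submit-and-wait design can.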

    Read the article

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup (MEB) does hot backup of InnoDB data and log files. Up to MEB 3.6.1, the user backs up only the InnoDB tables in a 3-step process:

    STEP 1. Take a backup using the --only-innodb option.
    STEP 2. Temporarily make the tables read-only by executing "FLUSH TABLES WITH READ LOCK".
    STEP 3. Manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.

    MEB 3.7.0 has an enhancement to InnoDB file copying: the .frm files get copied along with the hot backup done for the InnoDB files. I would like to make this post a little interactive by explaining the feature as questions and answers:

    1. What are these .frm files?
    The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup.

    2. Can the .frm files not be copied by MEB itself?
    --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to copy the .frm files without locking the tables during the backup operation itself. This reduces the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files of InnoDB tables during the backup operation.

    3. How is data consistency ensured?
    After copying, MEB validates the .frm files by comparing them with the server directory: if the timestamp of any .frm file is greater than the saved system time (the .frm check time), a table was altered during the backup. The total number of .frm files in the server directory is also verified against the copied contents. If the number of .frm files is lower than in the server directory, a table or tables have been dropped during the backup; if it is higher, a new table or tables have been created during the backup operation.

    4. How does MEB handle data inconsistency?
    MEB copies the .frm files in several iterations, performs the validation, and raises a WARNING if any inconsistency is found in the .frm files at the end of the backup operation. The user is thus warned that DDL operations occurred during the backup, and has to copy the .frm files manually or take the backup again.

    5. What is the option, and how is it used?
    The new option is --only-innodb-with-frm, which does an optimistic copy of the .frm files without locking. It can be used when the user wants to back up only InnoDB tables along with their .frm files. The option takes one of two values: all | related.
    --only-innodb-with-frm=all copies the .frm files of all InnoDB tables.
    --only-innodb-with-frm=related works in conjunction with the --include option, allowing a partial backup of only the .frm files corresponding to the tables specified in --include.

    Let me show the usage with example output:

        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup
        MySQL Enterprise Backup version 3.7.1 [2012/06/05]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
        INFO: Starting with following command line ...
         ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll
                --only-innodb-with-frm=all backup
        INFO: Got some server configuration information from running server.
        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup' run mysqlbackup
                   prints "mysqlbackup completed OK!".
        --------------------------------------------------------------------
                             Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /mysql/trydb/
          innodb_data_home_dir      =
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /mysql/trydb/
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        --------------------------------------------------------------------
                             Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /logs/backupWithFrmAll/datadir
          innodb_data_home_dir      = /logs/backupWithFrmAll/datadir
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /logs/backupWithFrmAll/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 1656792.
        mysqlbackup: INFO: Starting log scan from lsn 1656320.
        120817 15:36:22 mysqlbackup: INFO: Copying log...
        120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.
                 We wait 1 second before starting copying the data files...
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format).
        mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/'
        120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 1656792.
                 (This is the highest lsn found on page)
                 Scanned log up to lsn 1656792.
                 Was able to parse the log up to lsn 1656792.
                 Maximum page number for a log record 0
        mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
        120817 15:36:25 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll'
        -------------------------------------------------------------
          Parameters Summary
        -------------------------------------------------------------
          Start LSN : 1656320
          End LSN   : 1656792
        -------------------------------------------------------------
        mysqlbackup completed OK!

        bash$ ls /logs/backupWithFrmAll/datadir/innodb1/
        table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd

    Here the backup directory contains the .frm files of all the InnoDB tables.

        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup
        MySQL Enterprise Backup version 3.7.1 [2012/06/05]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
        INFO: Starting with following command line ...
         ./mysqlbackup -uroot --backup-dir=/logs/backup371frm
                --include=innodb1.table3.* --only-innodb-with-frm=related backup
        INFO: Got some server configuration information from running server.
        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup' run mysqlbackup
                   prints "mysqlbackup completed OK!".
        --------------------------------------------------------------------
                             Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /mysql/trydb/
          innodb_data_home_dir      =
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /mysql/trydb
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        --------------------------------------------------------------------
                             Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /logs/backupWithFrm/datadir
          innodb_data_home_dir      = /logs/backupWithFrm/datadir
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /logs/backupWithFrm/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: The --include option specified: innodb1.table3.*
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 1656792.
        mysqlbackup: INFO: Starting log scan from lsn 1656320.
        120817 15:25:47 mysqlbackup: INFO: Copying log...
        120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.
                 We wait 1 second before starting copying the data files...
        120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
        120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
        mysqlbackup: INFO: Opening backup source directory '/mysql/trydb'
        120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 1656792.
                 (This is the highest lsn found on page)
                 Scanned log up to lsn 1656792.
                 Was able to parse the log up to lsn 1656792.
                 Maximum page number for a log record 0
        mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
        120817 15:25:51 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm'
        -------------------------------------------------------------
          Parameters Summary
        -------------------------------------------------------------
          Start LSN : 1656320
          End LSN   : 1656792
        -------------------------------------------------------------
        mysqlbackup completed OK!

        bash$ ls /logs/backupWithFrm/datadir/innodb1/
        table3.frm  table3.ibd

    Thus the backup directory contains only the .frm file matching the InnoDB table name specified in the --include option.

    In a nutshell, we present our great new option --only-innodb-with-frm, which is a true hot InnoDB-only backup with .frm files, but with an additional check for whether any DDL happened during the backup. If DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users who keep all their "real" data in InnoDB and seldom change their schemas.

    You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html

    Read the article

  • Troubleshoot Database Concurrency in SQL Server with sp_locks

    sp_locks is a useful tool which can help a DBA in detecting and troubleshooting blocking and concurrency scenarios. This article demonstrates a worked example of using sp_locks to troubleshoot a database concurrency issue.

    Read the article

  • Self hosted WCF ServiceHost/WebServiceHost Concurrency/Performance Design Options (.NET 3.5)

    - by Kyle
    So I'll be providing a few functions via a self-hosted (in a WindowsService) WebServiceHost (not sure how to process HTTP GET/POST with a plain ServiceHost), one of which may be called a large amount of the time. This function will also rely on a connection in the appdomain (hosted by the WindowsService so it can stay alive over multiple requests). I have the following concerns and would be oh so thankful for any input/thoughts/comments:

    Concurrent access - how does the WebServiceHost handle a bunch of concurrent requests? Are they queued and processed sequentially, or are new instances of the contracts automagically created?

    WebServiceHost - WindowsService communication - I need some form of communication from the WebServiceHost to the hosting WindowsService for things like requesting a new session if one does not exist. Perhaps implementing a class which extends the WebServiceHost with events which the WindowsService subscribes to... (unless there is another way I can set off an event in the WindowsService when a request is made...)

    Multiple WebServiceHosts or Contracts - would it give any real performance gain to be running multiple WebServiceHost instances in different threads (one per endpoint perhaps?) - a better understanding of the first point would probably help here.

    WSDL - I'm not sure why (probably just need to do more reading), but I'm not sure how to get the WebServiceHost base endpoint to respond with a WSDL document describing the available contract. Not required, as all the operations will be done via GET requests which will not likely change, but it would be nice to have...

    That's about it for the moment ;) I've been reading a lot on WCF and wish I'd gotten into it long ago, but I'm definitely still learning.

    Read the article

  • Http requests / concurrency?

    - by maxp
    Say a website on my localhost takes about 3 seconds to do each request. This is fine and as expected (as it is doing some fancy networking behind the scenes). However, if I open the same URL in several tabs (in Firefox) and then reload them all at the same time, the pages appear to load sequentially rather than all at the same time. What is this all about? I have tried it on Windows Server 2008 IIS and Windows 7 IIS.

    Read the article

  • BasicHTTPServer, SimpleHTTPServer and concurrency

    - by braindump
    I'm writing a small web server for testing purposes using Python, BaseHTTPServer and SimpleHTTPServer. It looks like it's processing one request at a time. Is there any way to make it a little faster without messing around too deeply? Basically my code looks like the following, and I'd like to keep it this simple ;)

        os.chdir(webroot)
        httpd = BaseHTTPServer.HTTPServer(("", port), SimpleHTTPServer.SimpleHTTPRequestHandler)
        print("Serving directory %s on port %i" % (webroot, port))
        try:
            httpd.serve_forever()
        except KeyboardInterrupt:
            print("Server stopped.")

    Read the article

  • How do I get the java.concurrency.CyclicBarrier to work as expected

    - by Ritesh M Nayak
    I am writing code that will spawn two threads and then wait for them to sync up using the CyclicBarrier class. The problem is that the cyclic barrier isn't working as expected and the main thread doesn't wait for the individual threads to finish. Here's how my code looks:

        class mythread extends Thread {
            CyclicBarrier barrier;

            public mythread(CyclicBarrier barrier) {
                this.barrier = barrier;
            }

            public void run() {
                barrier.await();
            }
        }

        class MainClass {
            public void spawnAndWait() {
                CyclicBarrier barrier = new CyclicBarrier(2);
                mythread thread1 = new mythread(barrier).start();
                mythread thread2 = new mythread(barrier).start();
                System.out.println("Should wait till both threads finish executing before printing this");
            }
        }

    Any idea what I am doing wrong? Or is there a better way to write these barrier synchronization methods? Please help.
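
    A possible fix (an editor's sketch, not from the thread): the barrier was created for two parties, but the main thread is not one of them, so nothing ever makes it wait; in addition, Thread.start() returns void, so its result cannot be assigned, and await() throws checked exceptions. Making the main thread a third party addresses all of this:

        import java.util.concurrent.BrokenBarrierException;
        import java.util.concurrent.CyclicBarrier;

        class Worker extends Thread {
            private final CyclicBarrier barrier;

            Worker(CyclicBarrier barrier) { this.barrier = barrier; }

            @Override
            public void run() {
                try {
                    // ... do the real work here ...
                    barrier.await(); // signal that this worker is done
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        class MainClass {
            public void spawnAndWait() throws InterruptedException, BrokenBarrierException {
                CyclicBarrier barrier = new CyclicBarrier(3); // two workers + the main thread
                new Worker(barrier).start();
                new Worker(barrier).start();
                barrier.await(); // main waits here as the third party
                System.out.println("Should wait till both threads finish executing before printing this");
            }
        }

    For a one-shot "wait for N threads" handshake like this, a CountDownLatch(2) would arguably be even more direct.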

    Read the article

  • LINQ to SQL: Issue with concurrency

    - by Gib
    I’m working on a sandwich ordering app in ASP.NET MVC, C# and LINQ to SQL. The app revolves around the user creating multiple custom-made sandwiches from a selection of ingredients. When it comes to confirming the order, I need to know that there are enough portions of each ingredient to fulfil all the sandwiches in the user’s order before I commit to the DB, as it is possible that an ingredient will go out of stock between adding it to the basket and confirming the order. A bit about the database:

        Ingredient      – stores ingredient details, including the number of portions
        Order           – header table for an order; simply stores the order time
        OrderDetail     – stores a record of each sandwich in an order
        OrderDetailItem – stores each ingredient in each sandwich in an order

    So basically I’m wondering about the best approach to ensuring that, before I add records to Order, OrderDetail and OrderDetailItem, the order can be met.

    Read the article

  • Java Concurrency : Volatile vs final in "cascaded" variables?

    - by Tom
    Hello Experts,

    is

        final Map<Integer, Map<String, Integer>> status =
            new ConcurrentHashMap<Integer, Map<String, Integer>>();
        Map<Integer, Map<String, Integer>> statusInner =
            new ConcurrentHashMap<Integer, Map<String, Integer>>();
        status.put(key, statusInner);

    the same as

        volatile Map<Integer, Map<String, Integer>> status =
            new ConcurrentHashMap<Integer, Map<String, Integer>>();
        Map<Integer, Map<String, Integer>> statusInner =
            new ConcurrentHashMap<Integer, Map<String, Integer>>();
        status.put(key, statusInner);

    in case the inner Map is accessed by different threads? Or is even something like this required:

        volatile Map<Integer, Map<String, Integer>> status =
            new ConcurrentHashMap<Integer, Map<String, Integer>>();
        volatile Map<Integer, Map<String, Integer>> statusInner =
            new ConcurrentHashMap<Integer, Map<String, Integer>>();
        status.put(key, statusInner);

    In case it is NOT a "cascaded" map, final and volatile have in the end the same effect of making sure that all threads always see the correct contents of the Map... But what happens if the Map itself contains a Map, as in the example? How do I make sure that the inner Map is correctly "memory barriered"? Thanks! Tom
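
    For what it's worth, a sketch of one common pattern (an editor's example, not from the post, assuming the outer reference is never reassigned): a final field is enough here, because final guarantees safe publication after construction and ConcurrentHashMap's internal synchronization makes the inner maps' contents visible across threads; computeIfAbsent (Java 8) installs each inner map atomically.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class StatusRegistry {
            // final, not volatile: the field never changes after construction,
            // and the map handles its own memory visibility.
            private final ConcurrentMap<Integer, ConcurrentMap<String, Integer>> status =
                    new ConcurrentHashMap<Integer, ConcurrentMap<String, Integer>>();

            public void put(int key, String innerKey, int value) {
                // Atomically create-or-fetch the inner map, then update it.
                status.computeIfAbsent(key, k -> new ConcurrentHashMap<String, Integer>())
                      .put(innerKey, value);
            }

            public Integer get(int key, String innerKey) {
                ConcurrentMap<String, Integer> inner = status.get(key);
                return inner == null ? null : inner.get(innerKey);
            }
        }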

    Read the article

  • Control process concurrency in PHP

    - by Ivan
    I have programmed a simple app that checks every X minutes whether an image has changed on several websites and downloads it. It's very simple: download the image header, make some CRC checks, download the file, store some data about each image in a MySQL database, and process the next item... This process takes about 1 minute to complete. The problem is that I have noticed that while the server is executing this process, I cannot access any page on the website, even those that don't require MySQL. I don't know why this is happening and I have no clue how to fix it. Perhaps a more advanced PHP programmer can help me.

    Read the article

  • Has the 'message-passing vs. shared-state' dilemma (concurrency & distribution) taken the form of a 'holy war'?

    - by Bubba88
    I'm not too well-informed about the state of the discussion about which model is better, so I would like to ask a pretty straight question: does it look like two opposing views engaged in a really heated dispute? E.g. like prototype-based vs. class-based OOP, or dynamic vs. static typing (though these are really not very fitting examples; I just don't know how to express my question more clearly).

    Read the article

  • Parallel Tasking Concurrency with Dependencies on Python like GNU Make

    - by Brian Bruggeman
    I'm looking for a method, or possibly a philosophical approach, for how to do something like GNU Make within Python. Currently, we utilize makefiles to execute processing because makefiles are extremely good at parallel runs controlled by changing a single option: -j x. In addition, GNU Make already has the dependency stacks built into it, so adding a second processor, or the ability to process more threads, just means updating that single option. I want that same power and flexibility in Python, but I don't see it. As an example:

        all: dependency_a dependency_b dependency_c

        dependency_a: dependency_d
            stuff

        dependency_b: dependency_d
            stuff

        dependency_c: dependency_e
            stuff

        dependency_d: dependency_f
            stuff

        dependency_e:
            stuff

        dependency_f:
            stuff

    If we do a standard single-threaded run (-j 1), the order of operation might be:

        dependency_f -> dependency_d -> dependency_a -> dependency_b
                     -> dependency_e -> dependency_c

    For two threads (-j 2), we might see:

        1: dependency_f -> dependency_d -> dependency_a -> dependency_b
        2: dependency_e -> dependency_c

    Does anyone have any suggestions on either a package already built or an approach? I'm totally open, provided it's a Pythonic solution/approach. Please and thanks in advance!

    Read the article

  • Google App Engine - Dealing with concurrency issues of storing an object

    - by Spines
    My User object that I want to create and store in the datastore has an email, and a username. How do I make sure when creating my User object that another User object doesn't also have either the same email or the same username? If I just do a query to see if any other users have already used the username or the email, then there could be a race condition. UPDATE: The solution I'm currently considering is to use the MemCache to implement a locking mechanism. I would acquire 2 locks before trying to store the User object in the datastore. First a lock that locks based on email, then another that locks based on username. Since creating new User objects only happens at user registration time, and it's even rarer that two people try to use either the same username or the same email, I think it's okay to take the performance hit of locking. I'm thinking of using the MemCache locking code that is here: http://appengine-cookbook.appspot.com/recipe/mutex-using-memcache-api/ What do you guys think?

    Read the article

  • The simplest concurrency pattern

    - by Ilya Kogan
    Please help me recall one of the simplest parallel programming techniques. How do I do the following in C#:

        Initial state:
            semaphore counter = 0

        Thread 1:
            // Block until semaphore is signalled
            semaphore.Wait();    // wait until semaphore counter is 1

        Thread 2:
            // Allow thread 1 to run:
            semaphore.Signal();  // increments from 0 to 1

    It's not a mutex, because there is no critical section - or rather, you could say the critical section is infinite. So what is it?
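
    For comparison, the same handshake written with java.util.concurrent (an editor's sketch; the pattern is usually called a binary semaphore, or an "event" in Win32 terms):

        import java.util.concurrent.Semaphore;

        public class Handshake {
            public static void main(String[] args) throws InterruptedException {
                final Semaphore semaphore = new Semaphore(0); // counter starts at 0

                Thread thread1 = new Thread(new Runnable() {
                    public void run() {
                        try {
                            semaphore.acquire(); // blocks until the counter is 1
                            System.out.println("thread 1 released");
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
                thread1.start();

                semaphore.release(); // thread 2's Signal(): increments 0 -> 1
                thread1.join();
            }
        }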

    Read the article

  • Concurrency and Calendar classes

    - by fbielejec
    I have a thread (a class implementing Runnable, called AnalyzeTree) organised around a hash map (ConcurrentMap slicesMap). The class goes through the data (called trees here) in a large text file and parses the geographical coordinates from it into the HashMap. The idea is to process one tree at a time and add or grow the values according to the key (which is just a Double value representing time). The relevant part of the code looks like this:

        // grow map entry if key exists
        if (slicesMap.containsKey(sliceTime)) {
            double[] imputedLocation = imputeValue(location, parentLocation,
                    sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise,
                    currentTreeNormalization, precisionArray);
            slicesMap.get(sliceTime).add(
                    new Coordinates(imputedLocation[1], imputedLocation[0], 0.0));
        // start new entry if no such key in the map
        } else {
            List<Coordinates> coords = new ArrayList<Coordinates>();
            double[] imputedLocation = imputeValue(location, parentLocation,
                    sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise,
                    currentTreeNormalization, precisionArray);
            coords.add(new Coordinates(imputedLocation[1], imputedLocation[0], 0.0));
            slicesMap.putIfAbsent(sliceTime, coords);
            // slicesMap.put(sliceTime, coords);
        } // END: key check

    And the class is called like this (executor is ExecutorService executor = Executors.newFixedThreadPool(NTHREDS)):

        mrsd = new SpreadDate(mrsdString);
        int readTrees = 1;
        while (treesImporter.hasTree()) {
            currentTree = (RootedTree) treesImporter.importNextTree();
            executor.submit(new AnalyzeTree(currentTree, precisionString,
                    coordinatesName, rateString, numberOfIntervals,
                    treeRootHeight, timescaler, mrsd, slicesMap, useTrueNoise));
            // new AnalyzeTree(currentTree, precisionString,
            //         coordinatesName, rateString, numberOfIntervals,
            //         treeRootHeight, timescaler, mrsd, slicesMap,
            //         useTrueNoise).run();
            readTrees++;
        } // END: while has trees

    Now this runs into trouble when executed in parallel (the commented-out part, running sequentially, is fine). I thought it might throw a ConcurrentModificationException, but apparently the problem is in mrsd (an instance of SpreadDate, which is simply a class for date-related calculations).

    The SpreadDate class looks like this:

        public class SpreadDate {
            private Calendar cal;
            private SimpleDateFormat formatter;
            private Date stringdate;

            public SpreadDate(String date) throws ParseException {
                // if no era specified assume current era
                String line[] = date.split(" ");
                if (line.length == 1) {
                    StringBuilder properDateStringBuilder = new StringBuilder();
                    date = properDateStringBuilder.append(date).append(" AD").toString();
                }
                formatter = new SimpleDateFormat("yyyy-MM-dd G", Locale.US);
                stringdate = formatter.parse(date);
                cal = Calendar.getInstance();
            }

            public long plus(int days) {
                cal.setTime(stringdate);
                cal.add(Calendar.DATE, days);
                return cal.getTimeInMillis();
            } // END: plus

            public long minus(int days) {
                cal.setTime(stringdate);
                cal.add(Calendar.DATE, -days); // line 39
                return cal.getTimeInMillis();
            } // END: minus

            public long getTime() {
                cal.setTime(stringdate);
                return cal.getTimeInMillis();
            } // END: getTime
        }

    And the stack trace from when the exception is thrown:

        java.lang.ArrayIndexOutOfBoundsException: 58
            at sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:454)
            at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2098)
            at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2013)
            at java.util.Calendar.setTimeInMillis(Calendar.java:1126)
            at java.util.GregorianCalendar.add(GregorianCalendar.java:1020)
            at utils.SpreadDate.minus(SpreadDate.java:39)
            at templates.AnalyzeTree.run(AnalyzeTree.java:88)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
            at java.util.concurrent.FutureTask.run(FutureTask.java:166)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
            at java.lang.Thread.run(Thread.java:636)

    If I move the part initializing mrsd into the AnalyzeTree class, it runs without any problems - but it is not very memory efficient to initialize the class on every run of the thread, hence my concern. How can this be remedied?
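
    A likely explanation (an editor's note): every AnalyzeTree task shares the single Calendar instance inside mrsd, and java.util.Calendar is not thread-safe, so concurrent setTime/add calls corrupt its internal state - exactly the kind of ArrayIndexOutOfBoundsException seen above. A sketch of a variant that stays a single shared instance but gives each call its own Calendar:

        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.Calendar;
        import java.util.Locale;

        public class SpreadDate {
            private final long baseMillis; // immutable after construction

            public SpreadDate(String date) throws ParseException {
                // if no era specified assume current era
                if (date.split(" ").length == 1) {
                    date = date + " AD";
                }
                SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd G", Locale.US);
                baseMillis = formatter.parse(date).getTime();
            }

            private long shifted(int days) {
                Calendar cal = Calendar.getInstance(); // local, never shared
                cal.setTimeInMillis(baseMillis);
                cal.add(Calendar.DATE, days);
                return cal.getTimeInMillis();
            }

            public long plus(int days)  { return shifted(days); }

            public long minus(int days) { return shifted(-days); }

            public long getTime()       { return baseMillis; }
        }

    (SimpleDateFormat is not thread-safe either, so it is kept local to the constructor; a ThreadLocal<Calendar> would avoid the small per-call allocation if that ever matters.)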

    Read the article

  • PostgreSQL, triggers, and concurrency to enforce a temporal key

    - by Hobbes
    I want to define a trigger in PostgreSQL to check that the inserted row, on a generic table, has the property: "no other row exists with the same key in the same valid time" (the keys are sequenced keys). In fact, I have already implemented it. But since the trigger has to scan the entire table, I'm now wondering: is there a need for a table-level lock? Or is this managed somehow by PostgreSQL itself?

    Read the article

  • Groovy & Grails Concurrency ( quartz, executor )

    - by Pietro
    What I'm trying to do is to run multiple threads at some starting time. Those threads must stay alive for 90 minutes after the start. During the 90 minutes they execute something after a random sleep time (e.g. 5 to 15 minutes). Here is pseudocode for how I would implement it. The problem is that when implemented this way the threads run in an unexpected way. How can I implement something like this correctly?

        class MyJob {
            static triggers = {
                cron name: 'first', cronExpression: "0 30 21 * * FRI"
                cron name: 'second', cronExpression: "0 30 19 * * FRI"
                cron name: 'third', cronExpression: "0 30 17 * * FRI"
            }

            def myService

            def execute() {
                switch ( /* between trigger name */ ) {
                    case 'first':
                        model = Model.findByAttribute(...)
                        ...
                        myService.run(model, start_time)
                        break
                    ...
                }
            }
        }

        class MyService {
            def run(model, start_time) {
                def end_time = start_time.plusMinutes(90)
                model.fields.each { field ->
                    Thread.start {
                        executeSomeTasks(field, start_time, end_time)
                    }
                }
            }

            def executeSomeTasks(field, start_time, end_time) {
                while (start_time < end_time) {
                    // ...do something...
                    sleep(Random.nextInt(1000))
                }
            }
        }
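
    One way to get this behaviour without hand-rolled threads and sleeps (an editor's sketch in plain Java; the names are made up): chain each run through a ScheduledExecutorService with a fresh random delay, and stop chaining once the 90-minute window closes.

        import java.util.Random;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class RandomIntervalRunner {
            private final ScheduledExecutorService scheduler =
                    Executors.newScheduledThreadPool(4);
            private final Random random = new Random();

            // Run 'work' repeatedly at random 5-15 minute intervals until endTimeMillis.
            public void start(final Runnable work, final long endTimeMillis) {
                scheduleNext(work, endTimeMillis);
            }

            private void scheduleNext(final Runnable work, final long endTimeMillis) {
                long delayMinutes = 5 + random.nextInt(11); // uniform in 5..15
                scheduler.schedule(new Runnable() {
                    public void run() {
                        if (System.currentTimeMillis() < endTimeMillis) {
                            work.run();
                            scheduleNext(work, endTimeMillis); // chain the next run
                        }
                    }
                }, delayMinutes, TimeUnit.MINUTES);
            }
        }

    The cron-triggered job would then just compute endTimeMillis (start + 90 minutes) and call start() once per field, instead of spawning a raw thread per field.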

    Read the article

  • Good concurrency example of Java vs. Clojure

    - by Michiel Borkent
    Clojure is said to be a language that makes multi-thread programming easier. From the Clojure.org website: Clojure simplifies multi-threaded programming in several ways. Now I'm looking for a non-trivial problem solved in Java and in Clojure so I can compare/contrast their simplicity. Anyone?

    Read the article

  • Database concurrency issue in .NET application

    - by MC.
    If userA deletes OrderA while userB is modifying OrderA, and userB then saves OrderA, there is no order in the database to be updated. My problem is that there is no error! The SqlDataAdapter.Update succeeds and returns a "1", indicating a record was modified, when this is not true. Does anybody know how this is supposed to work? Thanks.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >