Search Results

Search found 341 results on 14 pages for 'atomic'.


  • Too complex/too many objects?

    - by Mike Fairhurst
    I know this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share. The questions are at the bottom if you want to skip the details; most are about OOP in general.

    Begin context. I am a junior dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc. that are just plain not in use here. The strongly OO API that I've proposed is being called too complex, even though it is trivial to implement.

    The problem I'm solving is that our permission checks are done via a message object (my API; they wanted to use arrays of constants), and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your permission containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed' but will approve 'allowable', which actually approves the whole permission check. We have about eleven validators, too many to easily track at such minute detail.

    So I proposed an AtomicPermission class. To fix the previous example, the permission would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable', now it would instead say '"allowable" is OK', at which point the check ends... and the check fails, because 'rare but disallowed' was never specifically okayed. The implementation is just four trivial objects, and rewriting a 10-line function into a 15-line function:

    ```php
    abstract class PermissionAtom {
        abstract public function allow(); // maybe deny() as well
        abstract public function wasAllowed();
    }

    class PermissionField extends PermissionAtom {
        public function getName() { /* ... */ }
        public function getValue() { /* ... */ }
    }

    class PermissionIdentifier extends PermissionAtom {
        public function getIdentifier() { /* ... */ }
    }

    class PermissionAction extends PermissionAtom {
        public function getType() { /* ... */ }
    }
    ```

    They say that this is 'not going to get us anything important', that it is 'too complex', and that it 'will be difficult for new developers to pick up.' I respectfully disagree, and there I end my context to begin the broader questions. So the question is about my OOP; are there any guidelines I should know? Not that I expect to get more than 'it depends, I'd have to see if...':

    - Is this too complicated / too much OOP?
    - When is OO abstraction too much? When is it too little?
    - How can I determine when I am overthinking a problem vs. fixing one?
    - How can I determine when I am adding bad code to a bad project?
    - How can I pitch these APIs? I feel the other devs would rather say 'it's too complicated' than ask 'can you explain it?' whenever I suggest a new class.
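
    A minimal sketch of the fail-closed behaviour the atoms buy you (in Java rather than PHP, with hypothetical names; not the poster's actual implementation): the overall check passes only if every atom was explicitly allowed, so a validator that doesn't recognize 'rare but disallowed' can no longer approve the whole permission by accident.

    ```java
    import java.util.List;

    final class AtomicPermissionSketch {
        interface Atom { String name(); }
        record SimpleAtom(String name) implements Atom {}

        interface Validator {
            boolean allows(Atom atom); // true only for atoms it recognizes and approves
        }

        // Fail closed: every atom must be okayed by some validator.
        static boolean check(List<Atom> atoms, List<Validator> validators) {
            for (Atom atom : atoms) {
                if (validators.stream().noneMatch(v -> v.allows(atom))) {
                    return false; // an atom nobody okayed fails the whole check
                }
            }
            return true;
        }

        public static void main(String[] args) {
            Validator v = atom -> atom.name().equals("allowable");
            List<Atom> perm = List.of(new SimpleAtom("allowable"),
                                      new SimpleAtom("rare but disallowed"));
            System.out.println(check(perm, List.of(v))); // false: fails closed
        }
    }
    ```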

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you; just hit the '+' sign drop-down for access. Topics include: domain object caching, service response caching, session state caching, JSR-107, HotCache, and more!

    Further, our audience asked several interesting questions, and we thought we'd share a sampling of those here, just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has primarily a key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the best-known NoSQL products: access latency is lower with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching? Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache? It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior, and there are use cases for that.

    Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA/XA distributed transactions, Coherence caches can participate as resources. But nobody does that, because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA/XA distributed transaction semantics.
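
    The per-key guarantee described in that last answer is the same idea that java.util.concurrent exposes through ConcurrentHashMap.compute: one function applied atomically per key. A rough analogy only (this is not the Coherence EntryProcessor API):

    ```java
    import java.util.concurrent.ConcurrentHashMap;

    public class PerKeyAtomicity {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
            cache.put("account-42", 100);

            // The remapping function runs atomically for this key: no other
            // update to "account-42" can interleave with this read-modify-write.
            cache.compute("account-42", (key, balance) ->
                    balance == null ? null : balance - 30);

            System.out.println(cache.get("account-42")); // 70
        }
    }
    ```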

  • NServiceBus and NHibernate - Message Handler and Transactions

    - by mattcodes
    From my understanding, NServiceBus executes the Handle method of an IMessageHandler within a transaction; if an exception propagates out of this method, NServiceBus will ensure the message is put back on the message queue (up to X times before going to the error queue), so we have an atomic operation, so to speak. Now, suppose that inside my NServiceBus Handle method I do something like this:

    ```csharp
    using (var trans1 = session.BeginTransaction())
    {
        person.Age = 10;
        session.Update<Person>(person);
        trans1.Commit();
    }

    using (var trans2 = session.BeginTransaction())
    {
        person.Age = 20;
        session.Update<Person>(person);
        // throw new ApplicationException("Oh no");
        trans2.Commit();
    }
    ```

    What is the effect of this on the transaction scope? Is trans1 now counted as a nested transaction in terms of its relationship with the NServiceBus transaction, even though we have done nothing to marry them up? (If not, how would one link onto the transaction of NServiceBus?) Looking at the second block (trans2): if I uncomment the throw statement, will the NServiceBus transaction then roll back trans1 as well? In basic scenarios, say I dump the above into a console app, trans1 is independent: committed, flushed, and it won't roll back. I'm trying to clarify what happens now that we sit inside someone else's transaction, like NServiceBus's. The above is just example code; I wouldn't be working directly with the session, but rather through a unit-of-work pattern.

  • UIWebView rotation and memory problem

    - by Fer
    Hi. I have a UIWebView that loads an embedded XHTML file like this:

    ```objc
    body = [[UIWebView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    body.scalesPageToFit = YES;
    body.backgroundColor = [UIColor scrollViewTexturedBackgroundColor];
    body.autoresizingMask = UIViewAutoresizingFlexibleBottomMargin
                          | UIViewAutoresizingFlexibleWidth
                          | UIViewAutoresizingFlexibleLeftMargin
                          | UIViewAutoresizingFlexibleRightMargin;
    [self.view addSubview:body];
    [body loadRequest:[NSURLRequest requestWithURL:
        [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"article16"
                                                               ofType:@"xhtml"]
                   isDirectory:NO]]];
    ```

    This XHTML has a lot of images, and when I rotate the device I get memory warnings, and sometimes the app crashes. If I remove the autoresizing mask, specifically UIViewAutoresizingFlexibleWidth (I have tried with only UIViewAutoresizingFlexibleLeftMargin and UIViewAutoresizingFlexibleRightMargin, with no problems), the memory warnings stop and the app no longer crashes. If I remove all autoresizing masks and set the new web view frame in didRotate or willRotate, I get the same warnings as with UIViewAutoresizingFlexibleWidth. There's an app called Atomic Web that can open the same XHTML and rotate with no memory warnings, and Safari can open it as well, but if I create a project with only that UIWebView, the app sometimes crashes. Does anyone know what this might be? Can I set the web view frame some other way? Why can't I use the autoresizing mask? Thanks for your attention.

  • codingbat wordEnds using regex

    - by polygenelubricants
    I'm trying to solve wordEnds from codingbat.com using regex. This is as simple as I can make it with my current knowledge of regex:

    ```java
    public String wordEnds(String str, String word) {
        return str.replaceAll(
            String.format(
                ".*?(?=%s)(?<=(.|^))%1$s(?=(.|$))|.+",
                java.util.regex.Pattern.quote(word)
            ),
            "$1$2"
        );
    }
    ```

    String.format is used to inject word into the pattern for both readability and convenience (it's injected twice). Pattern.quote isn't necessary to pass their tests, but I think it's required for a proper regex-based solution. The regex has two major parts:

    - If, after matching as few characters as possible (".*?"), word can still be found ("(?=%s)"), then look behind to capture any character immediately preceding it ("(?<=(.|^))"), match word ("%1$s"), and look ahead to capture any character following it ("(?=(.|$))").
      - The initial "if" test ensures that the atomic lookbehind captures only if there's a word.
      - Using lookahead to capture the following character doesn't consume it, so it can be used as part of further matching.
    - Otherwise, match what's left ("|.+"); groups 1 and 2 will capture empty strings.

    I think this works in all cases, but it's obviously quite complex. I'm just wondering if others can suggest a simpler regex to do this. Note: I'm not looking for a solution using indexOf and a loop. I want a regex-based replaceAll solution, and a working one that I can just copy-paste into codingbat and pass.
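
    A quick harness (mine, not part of the codingbat tests) for eyeballing the behaviour; going by the stated wordEnds rules, the classic example "abcXY123XYijk" with word "XY" should come out as "c13i":

    ```java
    public class WordEndsDemo {
        // The regex-based solution from above, unchanged.
        static String wordEnds(String str, String word) {
            return str.replaceAll(
                String.format(
                    ".*?(?=%s)(?<=(.|^))%1$s(?=(.|$))|.+",
                    java.util.regex.Pattern.quote(word)
                ),
                "$1$2"
            );
        }

        public static void main(String[] args) {
            System.out.println(wordEnds("abcXY123XYijk", "XY")); // expect "c13i"
            System.out.println(wordEnds("XY123XY", "XY"));       // expect "13"
            System.out.println(wordEnds("XY", "XY"));            // expect ""
        }
    }
    ```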

  • newline-ignoring diff / diff across multiple lines / reflow-ignoring diff

    - by Adam
    Does anybody know of a diff-like tool that can show me the changes between two text files but ignore changes in whitespace, including newlines? Here's an example (the exact wrap points are illustrative). Before:

    ```
    the quick brown fox jumped over the lazy bear. the quick brown fox
    jumped over the lazy bear. the quick brown fox jumped over the lazy
    bear. the quick brown fox jumped over the lazy bear.
    ```

    After:

    ```
    quick brown fox jumped over the lazy bear. the quick brown fox jumped
    over the lazy bear. the quick brown fox jumped over the lazy bear. the
    quick brown fox jumped over the lazy bear.
    ```

    All I did was delete one word and reflow the text, but "diff -b" detects a change on every line (as it should; I'm not saying this is a bug in diff). For large LaTeX files this is a major problem: change one word in a long paragraph and the diff you get back is basically useless. By the way, I'm aware that this requires far more computational power than the usual lines-are-atomic diff. I'm only doing this on small human-generated files and am happy to wait a long time if I have to.

  • Child objects in MongoDB

    - by Jeremy B.
    I have been following along with Rob Conery's LINQ for MongoDB and have come across a question. In his example he shows how you can easily nest a child object. For my current experiment I have the following structure:

    ```csharp
    class Content
    {
        // ...
        Profile Profile { get; set; }
    }

    class Profile
    {
        // ...
    }
    ```

    This works great when looking at content items. The dilemma I'm facing is that I also want to treat the Profile as an atomic object. As it stands, it appears that I cannot query the Profile object directly; it comes packaged with Content results. If I want it to be inclusive but also queryable on its own, my first instinct is to make Profile a top-level object and then create a foreign-key-like structure under the Content class to tie the two together. But that feels like falling back on RDBMS practices, which seems to go against the spirit of Mongo. How would you treat an object you need to act upon independently, yet also want as a child object of another object?

  • Factory pattern vs ease-of-use?

    - by Curtis White
    Background: I am extending the ASP.NET Membership with custom classes and extra tables. The ASP.NET MembershipUser has a protected constructor and a public method to read the data from the database. I have extended the database structure with custom tables and associated classes. Instead of using a static method to create a new member, as in the original API, I allow the code to instantiate a simple object and fill in the data, because there are several entities involved.

    Original pattern #1 (protected constructor):

    ```
    static CreateUser(string myData, ...)
    User.Data = myData;
    User.Update();
    ```

    My preferred pattern #2 (public constructor):

    ```
    newUser = new MembershipUser();
    newUser.Data = ...;
    newUser.ComplexObject.Data = ...;
    newUser.Insert();
    newUser.Load(string key);
    ```

    I find pattern #2 easier and more natural to use, but method #1 is more atomic and ensures the object contains proper data. I'd like to hear opinions on the pros and cons. The problem in my mind is that I prefer a simple CRUD object, but I am also trying to utilize the underlying API, and these approaches don't match completely. For example, the API has methods like UnlockUser() and a read-only property for IsLockedOut.
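
    A minimal Java sketch of the tradeoff (hypothetical names, not the Membership API): the static factory validates everything up front and only ever hands back complete objects, while the open constructor-plus-setters style is more natural to call but allows half-initialized instances to escape.

    ```java
    // Pattern #1: static factory. Instances are always complete and valid.
    final class FactoryUser {
        private final String name;
        private final String email;

        private FactoryUser(String name, String email) {
            this.name = name;
            this.email = email;
        }

        static FactoryUser create(String name, String email) {
            if (name == null || email == null) {
                throw new IllegalArgumentException("name and email are required");
            }
            return new FactoryUser(name, email);
        }
    }

    // Pattern #2: open construction. Easier to use incrementally, but nothing
    // stops a caller from inserting a half-filled object.
    class CrudUser {
        String name;
        String email;

        void insert() {
            // persistence goes here; validation has to happen somewhere else
        }
    }
    ```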

  • What is the most platform- and Python-version-independent way to make a fast loop for use in Python?

    - by Statto
    I'm writing a scientific application in Python with a very processor-intensive loop at its core. I would like to optimise this as far as possible, with minimum inconvenience to end users, who will probably use it as an uncompiled collection of Python scripts on Windows, Mac, and (mainly Ubuntu) Linux. It is currently written in Python with a dash of NumPy, and I've included the code below.

    - Is there a solution which would be reasonably fast and would not require compilation? This seems the easiest way to maintain platform independence.
    - If using something like Pyrex, which does require compilation, is there an easy way to bundle many modules and have Python choose between them depending on the detected OS and Python version? Is there an easy way to build the collection of modules without needing access to every system with every version of Python?
    - Does one method lend itself particularly well to multi-processor optimisation?

    (If you're interested, the loop calculates the magnetic field at a given point inside a crystal by adding together the contributions of a large number of nearby magnetic ions, treated as tiny bar magnets. Basically, a massive sum of these.)

    ```python
    # calculate_dipole
    # -------------------------
    # calculate_dipole works out the dipole field at a given point within
    # the crystal unit cell
    # ---
    # INPUT
    # mu    = position at which to calculate the dipole field
    # r_i   = array of atomic positions
    # mom_i = corresponding array of magnetic moments
    # ---
    # OUTPUT
    # B = the B-field at this point
    # (zeros, dot and sqrt come from numpy; unit_vectors is defined elsewhere)
    def calculate_dipole(mu, r_i, mom_i):
        relative = mu - r_i
        r_unit = unit_vectors(relative)
        # 4pi / mu0 (at the front of the dipole eqn)
        A = 1e-7
        # initialise dipole field
        B = zeros(3, float)
        for i in range(len(relative)):
            # work out the dipole field and add it to the estimate so far
            B += A * (3 * dot(mom_i[i], r_unit[i]) * r_unit[i] - mom_i[i]) \
                 / sqrt(dot(relative[i], relative[i]))**3
        return B
    ```

  • Variant datatype library for C

    - by Joey Adams
    Is there a decent open-source C library for storing and manipulating dynamically typed variables (a.k.a. variants)? I'm primarily interested in atomic values (int8, int16, int32, uint, strings, blobs, etc.); JSON-style arrays and objects, as well as custom objects, would also be nice. A major use case for such a library is working with SQL databases. The most obvious feature of such a library would be a single type for all supported values, e.g.:

    ```c
    struct Variant {
        enum Type type;
        union {
            int8_t  int8_;
            int16_t int16_;
            /* ... */
        };
    };
    ```

    Other features might include converting Variant objects to/from C structures (using a binding table), converting values to/from strings, and integration with an existing database library such as SQLite. Note: I do not believe this question is a duplicate of http://stackoverflow.com/questions/649649/any-library-for-generic-datatypes-in-c , which refers to "queues, trees, maps, lists". What I'm talking about focuses more on making work with SQL databases roughly as smooth as it is in interpreted languages.

  • How to make Stack.Pop threadsafe

    - by user260197
    I am using the BlockingQueue code posted in this question, but realized I needed to use a Stack instead of a Queue given how my program runs. I converted it to use a Stack and renamed the class as needed. For performance I removed locking in Push, since my producer code is single threaded. My problem: how can threads working on the (now) thread-safe Stack know when it is empty? Even if I add another thread-safe wrapper around Count that locks on the underlying collection like Push and Pop do, I still run into the race condition that accessing Count and then calling Pop is not atomic. Possible solutions, as I see them (which is preferred, and am I missing any that would work better?):

    - Consumer threads catch the InvalidOperationException thrown by Pop().
    - Pop() returns nullptr when _stack->Count == 0; however, C++/CLI does not have the default() operator à la C#.
    - Pop() returns a boolean and uses an output parameter to return the popped element.

    Here is the code I am using right now:

    ```cpp
    generic <typename T>
    public ref class ThreadSafeStack
    {
    public:
        ThreadSafeStack()
        {
            _stack = gcnew Collections::Generic::Stack<T>();
        }

        void Push(T element)
        {
            _stack->Push(element);
        }

        T Pop(void)
        {
            System::Threading::Monitor::Enter(_stack);
            try {
                return _stack->Pop();
            }
            finally {
                System::Threading::Monitor::Exit(_stack);
            }
        }

        property int Count
        {
            int get(void)
            {
                System::Threading::Monitor::Enter(_stack);
                try {
                    return _stack->Count;
                }
                finally {
                    System::Threading::Monitor::Exit(_stack);
                }
            }
        }

    private:
        Collections::Generic::Stack<T> ^_stack;
    };
    ```
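
    For comparison, java.util.concurrent resolves the same race by folding the emptiness check into the pop operation itself, which returns null when the collection is empty; the check and the removal are one atomic step, so there is no Count-then-Pop window. A small sketch:

    ```java
    import java.util.concurrent.ConcurrentLinkedDeque;

    public class TryPopDemo {
        public static void main(String[] args) {
            // push/poll on a ConcurrentLinkedDeque gives LIFO (stack) order
            ConcurrentLinkedDeque<String> stack = new ConcurrentLinkedDeque<>();
            stack.push("job-1");
            stack.push("job-2");

            String item;
            while ((item = stack.poll()) != null) { // atomic check-and-pop
                System.out.println("popped " + item);
            }
            // no exception and no Count/Pop race when the stack drains
        }
    }
    ```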

  • Locking issues with replacing files on a website

    - by Moe Sisko
    I want to replace existing files on an IIS website with updated versions. Say these files are large PDF documents, which can be accessed via hyperlinks. The site is up 24x7, so I'm concerned about locking issues when a file is being updated at exactly the same time that someone is trying to read it. The files are updated using C# code run on the server. I can think of two options for opening the file for writing.

    Option 1: open the file for writing using FileShare.Read:

    ```csharp
    using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))
    ```

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the document opens up as a blank page.

    Option 2: open the file for writing using FileShare.None:

    ```csharp
    using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
    ```

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the browser shows an error. In IE 8 you get HTTP 500, "The website cannot display the page", and in Firefox 3.5 you get: "The process cannot access the file because it is being used by another process."

    The browser behaviour makes sense and seems reasonable. I guess it's highly unlikely that a user will attempt to read a file at exactly the same time you are updating it. It would be nice if, somehow, the file update were atomic, like updating a database with SQL wrapped in a transaction. I'm wondering if you worry about this sort of thing and prefer either of the above options, or even have other options of your own for updating files.
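
    That atomic swap does exist at the filesystem level on many platforms: write the new version to a temporary file, then rename it over the live one, so readers see either the whole old file or the whole new file. A sketch in Java (in .NET, File.Replace plays a similar role), assuming the temp file and the target live on the same filesystem and that it supports atomic moves:

    ```java
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class AtomicPublish {
        // Publish an updated file without readers ever seeing a half-written one.
        public static void publish(byte[] newContent, Path live) throws Exception {
            Path tmp = Files.createTempFile(live.getParent(), "upload-", ".tmp");
            Files.write(tmp, newContent);
            Files.move(tmp, live,
                    StandardCopyOption.ATOMIC_MOVE,
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }
    ```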

  • A simple WCF Service (POX) without complex serialization

    - by jammer59
    I'm a complete WCF novice trying to build and deploy a very, very simple IIS 7.0 hosted web service. For reasons outside of my control it must be WCF and not ASMX. It's a wrapper service for a pre-existing web application that simply does the following:

    1) Receives a POST request whose body is XML-encapsulated form elements, something like <field1>value</field1><field2>value</field2>. This is untyped XML, and the XML is atomic (a form), not a list of records/objects.

    2) Adds a couple of tags to the request XML and invokes another HTTP-based service with a simple POST of bare XML (the tags will actually be added by some internal SQL operations, but that isn't the issue).

    3) Receives the XML response from the third-party service and relays it as the response to the original calling client from step 1.

    The clients (step 1) will use some sort of web-based scripting, but could be anything: .aspx, Python, PHP, etc. I can't have SOAP, and the usual WCF-based REST examples, with their contracts and serialization, have me confused. This seems like a very common and conceptually very simple problem. It would be easy to implement in code, but IIS-hosted WCF is a requirement. Any pointers?

  • "Work stealing" vs. "Work shrugging"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging" as a dynamic load-balancing strategy? By "work shrugging" I mean busy processors pushing excess work towards less loaded neighbours, rather than idle processors pulling work from busy neighbours ("work stealing"). I think the general scalability should be the same for both strategies. However, I believe it is much more efficient for busy processors to wake idle processors if and when there is definitely work for them to do, than to have idle processors spinning or waking periodically to speculatively poll all neighbours for possible work. Anyway, a quick Google didn't turn up anything under the heading of "work shrugging" or similar, so any pointers to prior art and the jargon for this strategy would be welcome.

    Clarification/confession, in more detail: by "work shrugging" I actually envisage the work-submitting processor (which may or may not be the target processor) being responsible for looking around the immediate locality of the preferred target processor (based on data/code locality) to decide whether a near neighbour should be given the new work instead, because that neighbour doesn't have as much work to do. I am talking about an atomic read of the estimated queue lengths of the immediate (typically 2 to 4) neighbours here. I do not think this is any more coupling than is implied by thieves polling and stealing from their neighbours; it just happens much less often, or rather, only when it makes economic sense to do so. (I am assuming "lock-free, almost wait-free" queue structures in both strategies.) Thanks.
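
    A hypothetical sketch of the submit side of "work shrugging" as described (my names, not from any established library): the submitter reads the estimated queue lengths of a few neighbours and pushes the new job to the least loaded one.

    ```java
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class WorkShrugSketch {
        static final int NEIGHBOURS = 4;

        @SuppressWarnings("unchecked")
        static final ConcurrentLinkedQueue<Runnable>[] queues =
                new ConcurrentLinkedQueue[NEIGHBOURS];

        static {
            for (int i = 0; i < NEIGHBOURS; i++) {
                queues[i] = new ConcurrentLinkedQueue<>();
            }
        }

        // Push work to the least loaded of the preferred target's neighbours.
        // size() on a concurrent queue is only an estimate, which is all this
        // strategy needs.
        static void submit(Runnable job, int preferred) {
            int target = preferred;
            int best = queues[preferred].size();
            for (int i = 0; i < NEIGHBOURS; i++) {
                int len = queues[i].size();
                if (len < best) {
                    best = len;
                    target = i;
                }
            }
            queues[target].offer(job);
        }
    }
    ```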

  • Thread-safe lazy construction of a singleton in C++

    - by pauldoo
    Is there a way to implement a singleton object in C++ that is:

    - Lazily constructed in a thread-safe manner (two threads might simultaneously be the first user of the singleton; it should still only be constructed once).
    - Not reliant on static variables having been constructed beforehand (so the singleton object is itself safe to use during the construction of static variables).

    (I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed, i.e., even before static constructors are executed, because their values may already be "initialized" in the program image? If so, perhaps this can be exploited to implement a singleton mutex, which can in turn be used to guard the creation of the real singleton.)

    Excellent, it seems that I have a couple of good answers now (shame I can't mark two or three as being the answer). There appear to be two broad solutions:

    - Use static initialisation (as opposed to dynamic initialisation) of a POD static variable, and implement my own mutex with that using the built-in atomic instructions. This was the type of solution I was hinting at in my question, and I believe I knew about already.
    - Use some other library function like pthread_once or boost::call_once. These I certainly didn't know about, and I am very grateful for the answers posted.
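
    For contrast, the JVM bakes exactly this guarantee into class initialization (and C++11 later added the same for function-local statics). A sketch of the Java initialization-on-demand holder idiom, the analogous lazily constructed, thread-safe singleton:

    ```java
    public final class Singleton {
        private Singleton() {}

        // The JVM guarantees Holder is initialized exactly once, on first
        // use, even if several threads race to be the first caller.
        private static final class Holder {
            static final Singleton INSTANCE = new Singleton();
        }

        public static Singleton instance() {
            return Holder.INSTANCE;
        }
    }
    ```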

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted, which means that if I stored the elements in another table I'd have to sort them every time I accessed them. Finally, the list is basically atomic, in that any time I wish to access the list I will want the entire list rather than just a piece of it, so it seems silly to issue a database query to gather together pieces of the list. AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient, because it means I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems this problem should come up from time to time. ... Just a little more info to let you know where I'm coming from: as soon as I had begun understanding SQL and databases in general, I was turned on to LINQ to SQL, so now I'm a little spoiled, because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks all! John

  • javascript scope problem when lambda function refers to a variable in enclosing loop

    - by Stefan Blixt
    First question on Stack Overflow :) Hope I won't embarrass myself... I have a JavaScript function that loads a list of albums and then creates a list item for each album. Each list item should be clickable, so I call jQuery's click() with a function that does stuff. I do this in a loop. My problem is that all items seem to get the same click function, even though I try to make a new one that does different things in each iteration. Another possibility is that the iteration variable is somehow global, and the function refers to it. Code below; debug() is just an encapsulation of Firebug's console.debug().

    ```javascript
    function processAlbumList(data, c) {
        for (var album in data) {
            var newAlbum = $('<li class="albumLoader">' + data[album].title + '</li>').clone();
            var clickAlbum = function() {
                debug("contents: " + album);
            };
            debug("Album: " + album + "/" + data[album].title);
            $('.albumlist').append(newAlbum);
            $(newAlbum).click(clickAlbum);
        }
    }
    ```

    Here is a transcript of what it prints when the above function runs, followed by some debug lines caused by me clicking on different items. It always prints "10", which is the last value that the album variable takes (there are 10 albums):

    ```
    Album: 0/Live on radio.electro-music.com
    Album: 1/Doodles
    Album: 2/Misc Stuff
    Album: 3/Drawer Collection
    Album: 4/Misc Electronic Stuff
    Album: 5/Odds & Ends
    Album: 6/Tumbler
    Album: 7/Bakelit 32
    Album: 8/Film
    Album: 9/Bakelit
    Album: 10/Slow Zoom/Atomic Heart
    contents: 10
    contents: 10
    contents: 10
    contents: 10
    contents: 10
    ```

    Any ideas? Driving me up the wall, this is. :) /Stefan

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what is the best way to implement syncing data. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access, and flexibility (adaptability when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you share some of your strategy/experience? For me, I'm thinking of something like this:

    1) Store the Last Modified Date on the iPhone.

    2) Upon launching, send a request like getNewData.php?lastModifiedDate=...

    3) The server processes the request and sends back only the data modified since last time.

    4) This data is formatted as follows:

    ```
    <+><data id="..."></data></+>  // add this to SQLite/CoreData
    <-><data id="..."></data></->  // remove this
    <%><data id="..."><attribute>newValue</attribute></data></%>  // new modified value
    ```

    I don't want to make <+>, <->, <%>... markers for each attribute as well, because that would be too complicated, so when I receive a <%> field I would probably just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field).

    5) Once everything is downloaded and updated, I update the Last Modified Date field.

    The main problem with this strategy: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same process again, not to mention the potential for inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?

  • What information should a SVN/Versioned file commit comment contain?

    - by RenderIn
    I'm curious what kind of content should be in a versioned-file commit comment. Should it describe generally what changed (e.g. "The widget screen was changed to display only active widgets"), or should it be more specific (e.g. "A new condition was added to the where clause of the fetchWidget query to retrieve only active widgets by default")? How atomic should a single commit be? Just the file containing the updated query in a single commit (e.g. "Updated the widget screen to display only active widgets by default"), or should that and several other changes plus interface changes to a screen share the same commit, with a more general description like "Updated the widget screen: A) display only active widgets by default B) added button to toggle showing inactive widgets"? I see Subversion commit comments being used very differently and was wondering what others have had success with. Some comments are as brief as "updated files", while others are many paragraphs long, and others are formatted so that they can be queried and associated with an external system such as JIRA. I used to describe extensively both the reason for the change and the specific technical changes. Lately I've been scaling back and just giving a general "this is what I changed on this page" kind of comment.

  • volatile keyword seems to be useless?

    - by Finbarr
    ```java
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.atomic.AtomicInteger;

    public class Main implements Runnable {
        private static final int NUM_THREADS = 25;

        private final CountDownLatch cdl1 = new CountDownLatch(NUM_THREADS);
        private volatile int bar = 0;
        private AtomicInteger count = new AtomicInteger(0);

        public static void main(String[] args) {
            Main main = new Main();
            for (int i = 0; i < NUM_THREADS; i++)
                new Thread(main).start();
        }

        public void run() {
            int i = count.incrementAndGet();
            cdl1.countDown();
            try {
                cdl1.await();
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
            bar = i;
            if (bar != i)
                System.out.println("Bar not equal to i");
            else
                System.out.println("Bar equal to i");
        }
    }
    ```

    Each thread enters the run method and acquires a unique, thread-confined int variable i by getting a value from the AtomicInteger called count. Each thread then awaits the CountDownLatch called cdl1 (when the last thread reaches the latch, all threads are released). When the latch is released, each thread attempts to assign its confined i value to the shared, volatile int called bar. I would expect every thread except one to print "Bar not equal to i", but every thread prints "Bar equal to i". Eh, what does volatile actually do, if not this?
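
    For what it's worth, here is a separate sketch (mine, not the poster's) of what volatile does and does not buy you: it guarantees visibility of writes across threads, but compound actions such as read-modify-write still interleave; AtomicInteger closes that gap.

    ```java
    import java.util.concurrent.atomic.AtomicInteger;

    public class VolatileVsAtomic {
        static volatile int counter = 0; // writes are visible, but ++ is not atomic
        static final AtomicInteger safeCounter = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++;                     // read-modify-write: updates can be lost
                    safeCounter.incrementAndGet(); // atomic: never loses an update
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println("volatile counter: " + counter);           // usually < 200000
            System.out.println("atomic counter:   " + safeCounter.get()); // always 200000
        }
    }
    ```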

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication, as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known alphabets; what is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Out of the approximately 180,000 Words in use in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

    ```java
    // A simplified example of a "Lexical Encoding"
    String sentence = "How are you today?";
    int[] encoded = { 93, 22, 14, 330, QUERY };
    ```

    In this example, each token in the String was encoded as an integer. The encoding scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, though, a Word has both a spelling and a meaning. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language-specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning..." which could then be reconstituted into any language with a structured syntactic form and grammatical structure. What are other examples of "Lexical Encoding" techniques? If you were interested in where the word-usage statistics come from: http://www.wordcount.org

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message-framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

    ```csharp
    public void Send(Message message)
    {
        // Write the message to a temporary stream so we can send it all at once.
        MemoryStream tempStream = new MemoryStream();
        message.WriteToStream(tempStream);

        // Write the serialized message to the stream. The BinaryWriter is a
        // little redundant in this simplified example, but it's here because
        // the production code uses it.
        byte[] data = tempStream.ToArray();
        BinaryWriter bw = new BinaryWriter(stream);
        bw.Write(data, 0, data.Length);
        bw.Flush();
    }
    ```

    So the question is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?

  • Using SQL dB column as a lock for concurrent operations in Entity Framework

    - by Sid
    We have a long-running user operation that is handled by a pool of worker processes. Data input and output is from Azure SQL. The master Azure SQL table structure is approximately [UserId, col1, col2, ..., colN, beingProcessed, lastTimeProcessed], where beingProcessed is a boolean and lastTimeProcessed is a DateTime. The logic in every worker role is:

    ```csharp
    public void WorkerRoleMain()
    {
        while (true)
        {
            try
            {
                dbContext db = new dbContext();

                // Read
                foreach (UserProfile user in db.UserProfile
                    .Where(u => DateTime.UtcNow.Subtract(u.lastTimeProcessed) > TimeSpan.FromHours(24)
                              & u.beingProcessed == false))
                {
                    user.beingProcessed = true; // Modify
                    db.SaveChanges();           // Write

                    // Do some long drawn-out processing here
                    // ...

                    user.lastTimeProcessed = DateTime.UtcNow;
                    user.beingProcessed = false;
                    db.SaveChanges();
                }
            }
            catch (Exception ex)
            {
                LogException(ex);
                Sleep(TimeSpan.FromMinutes(5));
            }
        } // while
    }
    ```

    With multiple workers processing as above (each with their own Entity Framework layer), beingProcessed is in essence being used as a lock for mutual exclusion. Question: how can I deal with concurrency issues on the beingProcessed "lock" itself under the above load? I think the read-modify-write operation on beingProcessed needs to be atomic, but I'm open to other strategies. Open to other code refinements too.
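
    One common strategy (a sketch, not EF-specific): make the claim itself a single conditional UPDATE, so the test ("nobody is processing this row") and the set ("I am now processing it") happen atomically in the database rather than as a read followed by a write in application code. In Java/JDBC terms, using the table and column names from the post:

    ```java
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class RowClaim {
        // Returns true only if this worker won the claim on the row.
        static boolean tryClaim(Connection conn, int userId) throws Exception {
            String sql = "UPDATE UserProfile SET beingProcessed = 1 "
                       + "WHERE UserId = ? AND beingProcessed = 0";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, userId);
                return ps.executeUpdate() == 1; // exactly one worker sees 1 here
            }
        }
    }
    ```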

  • How can the Three-Phase Commit Protocol (3PC) guarantee atomicity?

    - by AndiDog
    I'm currently exploring worst-case scenarios of atomic commit protocols like 2PC and 3PC, and I'm stuck at the point that I can't work out why 3PC can guarantee atomicity. That is, how does it guarantee that if cohort A commits, cohort B also commits? Working from the simplified 3PC state diagram in the Wikipedia article, let's assume the following case:

    - Two cohorts participate in the transaction (A and B).
    - Both do their work, then vote to commit.
    - The coordinator now sends precommit messages...
    - A receives the precommit message, acknowledges, and then goes offline for a long time.
    - B doesn't receive the precommit message (whatever the reason might be) and is thus still in the "uncertain" state.

    The results:

    - The coordinator aborts the transaction, because not all precommit messages were sent and acknowledged successfully.
    - A, who is in the precommit state, is still offline, thus times out and commits.
    - B aborts in any case: either it stays offline and times out (causing an abort), or it comes back online and receives the abort command from the coordinator.

    And there you have it: one cohort committed, another aborted. The transaction is screwed. So what am I missing here? In my understanding, if the automatic commit on timeout (in the precommit state) were replaced by waiting indefinitely for a coordinator command, this case would work out fine.

  • What is an efficient strategy for multiple threads posting jobs and waiting for response from a single thread?

    - by jakewins
    In Java, what is an efficient solution to the following problem? I have multiple threads (10-20 or so) generating jobs ("job creators"), and a single thread capable of performing them ("the worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going. For sending the jobs to the worker thread, I think a ring buffer or a similar standard fan-in setup would perhaps be a good approach. But how does a job creator find out that its job has been done? I'm not so sure... The job creators could sleep and the worker could interrupt them when done, or each job creator could have an atomic boolean that it checks and the worker sets. I don't know; neither of those feels very nice. I'd like to do it with as few locks as absolutely possible (none, if possible). So to be clear: what I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!
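
    A minimal sketch of one standard answer (not necessarily the lowest-latency one): make the single worker an executor and have each job creator block on the Future for its own job, so the "it's done" signal is simply Future.get() returning.

    ```java
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SingleWorkerDemo {
        public static void main(String[] args) throws InterruptedException {
            // One worker thread performs all jobs, in submission order.
            ExecutorService worker = Executors.newSingleThreadExecutor();

            Runnable jobCreator = () -> {
                Future<?> done = worker.submit(() -> {
                    // ... the job runs here, on the single worker thread ...
                });
                try {
                    done.get(); // park this creator until its own job has finished
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            };

            Thread[] creators = new Thread[10];
            for (int i = 0; i < creators.length; i++) {
                creators[i] = new Thread(jobCreator);
                creators[i].start();
            }
            for (Thread t : creators) {
                t.join();
            }
            worker.shutdown();
        }
    }
    ```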
