Search Results

Search found 615 results on 25 pages for 'eventual consistency'.

Page 3/25 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Is data=journal on a separate device on Ext4 as good as using a RAID controller with battery backed cache for file system consistency?

    - by Jeff Strunk
    It seems to me that data=journal prevents file system inconsistency in the case of power failure. Using it with a dedicated journal device mitigates the performance penalty of writing the data twice. A power outage would still lose the data that is currently being written to the journal, but the file system on disk would always be consistent. If that amount of loss is acceptable, is a RAID controller with battery backed cache really worthwhile?

    Read the article

  • I want to have some cross-browser consistency on my fieldsets; do you know how I can do it?

    - by Omar
    I have this problem with fieldsets... have a look at http://i.imgur.com/IRrXB.png. Is it possible to achieve what I want with CSS? Believe me, I tried! As you can see in the image, I just want the look of the legend to be consistent across browsers: I want it to use the width of the fieldset, no more (like Chrome and IE) and no less (like Firefox). Don't worry about the rounded corners and other issues; that's taken care of. Here's the code I'm using.
    CSS
    <style type="text/css">
    fieldset {margin: 0 0 10px 0; padding: 0; border: 1px solid silver; background-color: #f9f9f9; -moz-border-radius: 5px; -webkit-border-radius: 5px; border-radius: 5px}
    fieldset p {clear: both; margin: .3em 0; overflow: hidden;}
    fieldset label {float: left; width: 140px; display: block; text-align: right; padding-right: 8px; margin-right: 2px; color: #4a4a4a;}
    fieldset input, fieldset textarea {margin: 0; border: 1px solid #ddd; padding: 3px 5px 3px 5px;}
    fieldset legend {background: #C6D1E8; position: relative; left: -1px; margin: 0; width: 100%; padding: 0px 5px; font-size: 1.11em; font-weight: bold; text-align: left; border: 1px solid silver; -webkit-border-top-left-radius: 5px; -webkit-border-top-right-radius: 5px; -moz-border-radius-topleft: 5px; -moz-border-radius-topright: 3px; border-top-left-radius: 5px; border-top-right-radius: 5px;}
    #md {width: 400px;}
    </style>
    HTML
    <div id="md">
      <fieldset>
        <legend>some title</legend>
        <p>
          <label>Login</label>
          <input type="text" />
        </p>
        <p>
          <label>Password</label>
          <input type="text" />
        </p>
        <p>
          <label>&nbsp;</label>
          <input type="submit">
        </p>
      </fieldset>
    </div>

    Read the article

  • List of Django model instance foreign keys losing consistency during state changes.

    - by Joshua
    I have a model, Match, with two foreign keys:
    class Match(models.Model):
        winner = models.ForeignKey(Player)
        loser = models.ForeignKey(Player)
    When I loop over Match I find that each model instance uses a unique object for the foreign key. This ends up biting me because it introduces inconsistency; here is an example:
    >>> def print_elo(match_list):
    ...     for match in match_list:
    ...         print match.winner.id, match.winner.elo
    ...         print match.loser.id, match.loser.elo
    ...
    >>> print_elo(teacher_match_list)
    4 1192.0000000000
    2 1192.0000000000
    5 1208.0000000000
    2 1192.0000000000
    5 1208.0000000000
    4 1192.0000000000
    >>> teacher_match_list[0].winner.elo = 3000
    >>> print_elo(teacher_match_list)
    4 3000            # Object 4
    2 1192.0000000000
    5 1208.0000000000
    2 1192.0000000000
    5 1208.0000000000
    4 1192.0000000000 # Object 4
    >>>
    I solved this problem like so:
    def unify_references(match_list):
        """Makes each unique reference to a model instance non-unique.

        In cases where multiple model instances are being used, Django creates a new
        object for each model instance, even if that means creating the same instance
        twice. If one of these objects has its state changed, any other object
        referencing the same model instance will not be updated. This method ensures
        that state changes are seen. It makes sure that variables which hold objects
        pointing to the same model all hold the same object. Visually this means that
        a list of [var1, var2] whose internals look like so:
            var1 --> object1 --> model1
            var2 --> object2 --> model1
        will result in the internals being changed so that:
            var1 --> object1 --> model1
            var2 ------^
        """
        match_dict = {}
        for match in match_list:
            try:
                match.winner = match_dict[match.winner.id]
            except KeyError:
                match_dict[match.winner.id] = match.winner
            try:
                match.loser = match_dict[match.loser.id]
            except KeyError:
                match_dict[match.loser.id] = match.loser
    My question: is there a way to solve the problem more elegantly through the use of QuerySets, without needing to call save at any point? If not, I'd like to make the solution more generic: how can you get a list of the foreign keys on a model instance, or do you have a better generic solution to my problem? Please correct me if you think I don't understand why this is happening.
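    A minimal sketch of a generic version of the helper above, assuming a reasonably recent Django (the _meta.fields / related_model API, Django 1.8+, so newer than the version the question was written against): it walks every ForeignKey on each instance and makes all attributes that point at the same row share a single object, with no save() calls.

```python
from django.db.models import ForeignKey

def unify_fk_references(instances):
    """Make every foreign-key attribute that refers to the same row share one object."""
    cache = {}  # (related model class, primary key) -> the one shared instance
    for obj in instances:
        for field in obj._meta.fields:
            if not isinstance(field, ForeignKey):
                continue
            related = getattr(obj, field.name)   # hits the DB unless already cached
            if related is None:
                continue
            key = (field.related_model, related.pk)
            # First time we see this row, remember its object; afterwards reuse it.
            setattr(obj, field.name, cache.setdefault(key, related))
    return instances
```

    Usage would be something like unify_fk_references(list(Match.objects.all())); whether that counts as elegant is debatable, but it avoids hard-coding the winner/loser field names.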

    Read the article

  • How to compare 2 complex spreadsheets running in parallel for consistency with each other?

    - by tbone
    I am working on converting a large number of spreadsheets to use a new 3rd party data access library (converting from third party library #1 to third party library #2). FYI: a call to a UDF (user defined function) is placed in a cell, and when that is refreshed, it pulls the data into a pivot table below the formula. Both libraries behave the same and produce the same output, except that small irregularities can arise, such as an additional field being shown in the output pivot table using library #2, which can affect formulas on the sheet if data is being read from the pivot table without using GetPivotData. So I have ~100 of these very complicated (20+ worksheets per workbook) spreadsheets that I have to convert, and run in parallel for a period of time, to see if the output using the new data access library matches the old library. Is there some clever approach to do this, so I don't have to spend a large amount of time analyzing each sheet to determine the specific elements to compare? Two rough ideas come to mind: 1. Just create a Validator workbook that has the same # of worksheets, and simply do a Workbook1!Worksheet1!A1 - Workbook2!Worksheet3!A1 for every possible cell on each sheet. 2. Roughly the equivalent of #1, but just traverse the cells in the 2 books using VBA, and log any cells that do not match. I don't particularly like either idea; can anyone think of something better than this, maybe some 3rd party utility I could buy?
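    For what it's worth, here is a rough sketch of idea #2 done in Python with openpyxl instead of VBA (an assumption on my part; the file names are placeholders, and data_only=True only returns the values Excel last cached when each workbook was saved):

```python
from openpyxl import load_workbook

def diff_workbooks(path_a, path_b):
    """Log every cell whose calculated value differs between the two workbooks."""
    wb_a = load_workbook(path_a, data_only=True)   # data_only=True reads cached values,
    wb_b = load_workbook(path_b, data_only=True)   # not the formulas themselves
    mismatches = []
    for name in wb_a.sheetnames:
        if name not in wb_b.sheetnames:
            mismatches.append((name, None, "sheet missing in second workbook"))
            continue
        ws_a, ws_b = wb_a[name], wb_b[name]
        max_row = max(ws_a.max_row, ws_b.max_row)
        max_col = max(ws_a.max_column, ws_b.max_column)
        for row in range(1, max_row + 1):
            for col in range(1, max_col + 1):
                va = ws_a.cell(row=row, column=col).value
                vb = ws_b.cell(row=row, column=col).value
                if va != vb:   # for floating-point results, consider math.isclose() with a tolerance
                    mismatches.append((name, ws_a.cell(row=row, column=col).coordinate, (va, vb)))
    return mismatches

for sheet, coord, detail in diff_workbooks("library1_output.xlsx", "library2_output.xlsx"):
    print(sheet, coord, detail)
```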

    Read the article

  • How to ensure consistency of enums in Java serialization?

    - by Uri
    When I serialize an object, I can use the serialVersionUID mechanism at the class level to ensure the compatibility of the two types. However, what happens when I serialize fields of enum values? Is there a way to ensure that the enum type has not been manipulated between serialization and deserialization? Suppose that I have an enum like OperationResult {SUCCESS, FAIL}, and a field called "result" in an object that is being serialized. How do I ensure, when the object is deserialized, that result is still correct even if someone maliciously reversed the two? (Suppose the enum is declared elsewhere as a static enum) I am wondering out of curiosity - I use jar-level authentication to prevent manipulation.

    Read the article

  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday's blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – NoSQL.
    What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is no SQL, which is not true – they both sound the same but the meaning is totally different. NoSQL does use SQL, but it uses more than SQL to achieve its goal. As per Wikipedia's NoSQL database definition – "A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases."
    Why use NoSQL? A traditional relational database usually deals with predictable, structured data. As the world has moved toward unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS, wave files, photos and video. It is a bit difficult to manage such data with a traditional relational database. I often see people using a BLOB field to store such data. A BLOB can store the data, but when we have to retrieve or process it, the same BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with support for unstructured data, the other advantages of a NoSQL database are high performance and high availability.
    Eventual Consistency Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though a NoSQL database may not support ACID, it provides eventual consistency. That means that over a period of time all updates can be expected to propagate through the system, and the data will eventually be consistent.
    Taxonomy Taxonomy is the practice and principles of classifying things or concepts. The NoSQL taxonomy covers column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples of each NoSQL category. Column: HBase, Cassandra, Accumulo. Document: MongoDB, Couchbase, Raven. Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m. Graph: Neo4J, Allegro, Virtuoso, Bigdata. As of now there are over 150 NoSQL databases, and you can read about all of them via this single link.
    Tomorrow In tomorrow's blog post we will discuss the buzz word – Hadoop.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
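    To make "eventual" concrete, here is a toy sketch in Python (my own illustration, not tied to any particular NoSQL product): each replica accepts writes on its own, and a background anti-entropy pass using last-write-wins brings all replicas back into agreement.

```python
import itertools

clock = itertools.count()              # stand-in for a timestamp / version number

def write(replica, key, value):
    replica[key] = (next(clock), value)    # each replica accepts writes locally

def anti_entropy(replicas):
    """One sync round: every replica adopts the newest version seen anywhere."""
    for key in {k for r in replicas for k in r}:
        newest = max((r[key] for r in replicas if key in r), key=lambda v: v[0])
        for r in replicas:
            r[key] = newest

r1, r2, r3 = {}, {}, {}
write(r1, "user:42", "Pinal")          # one update lands on replica 1 only
write(r2, "user:42", "Dave")           # a later update lands on replica 2 only
print(r1, r2, r3)                      # replicas disagree: reads may return stale data
anti_entropy([r1, r2, r3])
print(r1 == r2 == r3)                  # True: after propagation the system converges
```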

    Read the article

  • View Link inConsistency

    - by Abhishek Dwivedi
    What is View Link Consistency? When multiple instances (say VO1, VO2, VO3, etc.) of an EO-based VO are based on the same underlying EO, a new row created in one of these VO instances (say VO1) can be automatically added (without re-query) to the row sets of the others (VO2, VO3, etc.). This capability is known as view link consistency. This feature works for any VO for which it is enabled, regardless of whether it is involved in a view link or not. What causes View Link inConsistency? Unless jbo.viewlink.consistent is disabled for the VO (or globally), or setAssociationConsistent(false) is applied, any of the following can cause view link inconsistency: 1. setWhereClause 2. Unreferenced secondary EO 3. findByViewCriteria() 4. Using a view link accessor row set. Why does this happen - View Link inConsistency? Well, it can be for one of the following reasons. a. In the case of 1 & 2, the view link consistency flag is disabled on that view object. b. As far as 3 is concerned, findByViewCriteria is used to retrieve a new row set to process programmatically without changing the contents of the default row set. In this case, unlike the previous cases, the view link consistency flag is not disabled, meaning that changes in the default row set are reflected in the new row set. However, the opposite doesn't hold true. For instance, if a row is deleted from this new row set, the corresponding row in the default row set does not get deleted. In one of my features, which involved deletion of row(s), I resolved the view link inconsistency issue by replacing findByViewCriteria with applyViewCriteria. c. For 4, it's similar to 3 – whenever a view link accessor row set is retrieved, a new row set is created. Now, creating a new row set does not mean re-executing the query each time, only creating a new instance of a RowSet object with its default iterator reset to the "slot" before the first row. Also, please note that this new row set always originates from an internally created view object instance, not one that you added to the data model. This internal view object instance is created as needed and added with a system-defined name to the root application module. Anyway, the very reason a distinct, internally created view object instance is used is to guarantee that it remains unaffected by developer-related changes to their own view object instances in the data model.

    Read the article

  • Can't shrink Windows Boot NTFS disk: ERROR(5): Could not map attribute 0x80 in inode, Input/output error

    - by arcyqwerty
    Ubuntu 12.04 LTS, all updates current as of 7/3/2012 gksudo gparted Shrink /dev/sda2 from 367GB to 307GB GParted 0.11.0 --enable-libparted-dmraid Libparted 2.3 Shrink /dev/sda2 from 367.00 GiB to 307.00 GiB 00:32:57 ( ERROR ) calibrate /dev/sda2 00:00:00 ( SUCCESS ) path: /dev/sda2 start: 20,484,096 end: 790,142,975 size: 769,658,880 (367.00 GiB) check file system on /dev/sda2 for errors and (if possible) fix them 00:00:53 ( SUCCESS ) ntfsresize -P -i -f -v /dev/sda2 ntfsresize v2012.1.15AR.1 (libntfs-3g) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 394065338880 bytes (394066 MB) Current device size: 394065346560 bytes (394066 MB) Checking for bad sectors ... Checking filesystem consistency ... Accounting clusters ... Space in use : 327950 MB (83.2%) Collecting resizing constraints ... Estimating smallest shrunken size supported ... File feature Last used at By inode $MFT : 389998 MB 0 Multi-Record : 394061 MB 386464 $MFTMirr : 314823 MB 1 Compressed : 394064 MB 1019521 Sparse : 330887 MB 752454 Ordinary : 393297 MB 706060 You might resize at 327949758464 bytes or 327950 MB (freeing 66116 MB). Please make a test run using both the -n and -s options before real resizing! shrink file system 00:32:04 ( ERROR ) run simulation 00:32:04 ( ERROR ) ntfsresize -P --force --force /dev/sda2 -s 329640837119 --no-action ntfsresize v2012.1.15AR.1 (libntfs-3g) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 394065338880 bytes (394066 MB) Current device size: 394065346560 bytes (394066 MB) New volume size : 329640829440 bytes (329641 MB) Checking filesystem consistency ... Accounting clusters ... Space in use : 327950 MB (83.2%) Collecting resizing constraints ... Needed relocations : 13300525 (54479 MB) Schedule chkdsk for NTFS consistency check at Windows boot time ... Resetting $LogFile ... (this might take a while) Relocating needed data ... Updating $BadClust file ... Updating $Bitmap file ... ERROR(5): Could not map attribute 0x80 in inode 1667593: Input/output error ======================================== Windows has run chkdsk successfully (on boot) several times now

    Read the article

  • Azure, SLAs and CAP theorem

    - by dayscott
    Azure itself is, IMO, PaaS and not IaaS. Do you agree? MS guarantees an availability of 99% and strong consistency. You can find the MS SLAs here: http://www.microsoft.com/windowsazure/sla (three SLAs; uptime: http://img229.imageshack.us/img229/4889/unbenanntqt.png ) I can't find anything about how they are going to achieve that. Do they do backups? If yes: how do they manage consistency? According to the CAP theorem (http://camelcase.blogspot.com/2007/08/cap-theorem.html ), their claims are not realistic. 2.1 Do you know detailed technical stuff about how they are going to realize the claims about consistency and availability? On the MS page you'll find three SLA .docs, one for SQL Azure, the second for Azure AppFabric/.Net Services and the third for Azure Compute & Storage. (Screenshot in 1.) How can one track whether SLAs are violated? Do they offer some sort of monitor, so I don't have to measure the uptime by myself?

    Read the article

  • Articles about replication schemes/algorithms?

    - by jkff
    I'm designing a hierarchical distributed system (every node has zero or more "master" nodes to which it propagates its current data). The data gets continuously updated and I'd like to guarantee that at least N nodes have almost-current data at any given time. I do not need complete consistency, only eventual consistency (i.e. for any time instant, the current snapshot of data should eventually appear on at least N nodes. It is tricky to define the term "current" here, but still). Nodes may fail and come back up at any moment, and there is no single "central" node. O overflowers! Point me to some good papers describing replication schemes. I've so far found one: Consistency Management in Optimistic Replication Algorithms
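    Not a paper, but to pin down the "at least N nodes almost-current" requirement, here is a rough Python sketch under simplifying assumptions (synchronous push, no failures, made-up names): an update is pushed up through a node's masters and is considered safe once it is present on N distinct nodes.

```python
class Node:
    def __init__(self, name, masters=()):
        self.name = name
        self.masters = list(masters)
        self.data = {}                      # key -> (version, value)

    def apply(self, key, version, value, seen):
        current = self.data.get(key, (-1, None))
        if version > current[0]:            # keep only the newest version
            self.data[key] = (version, value)
        seen.add(self.name)
        for m in self.masters:              # propagate up the hierarchy
            m.apply(key, version, value, seen)
        return seen

def replicate(origin, key, version, value, n_required):
    seen = origin.apply(key, version, value, set())
    return len(seen) >= n_required          # True once N nodes hold the current version

root = Node("root")
mid = Node("mid", masters=[root])
leaf = Node("leaf", masters=[mid])
print(replicate(leaf, "sensor:7", 1, 42.0, n_required=3))   # True: leaf, mid, root
```

    A real scheme would make the propagation asynchronous and handle failures, which is exactly where the optimistic-replication literature (version vectors, anti-entropy, gossip) comes in.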

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. Learning is always fun when it comes to SQL Server and learning the basics again can be more fun. I did write about Transaction Logs and recovery over my blogs and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservation, banks, customer supports etc there is a process of line and queue to facilitate everyone. Shorter the queue higher is the efficiency of system (a.k.a higher is the concurrency). Every database does implement this using checks like locking, blocking mechanisms and they implement the standards in a way to facilitate higher concurrency. In this post, let us talk about the topic of Concurrency and what are the various aspects that one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners: Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system. Concurrency is reduced when a process that is changing data prevents other processes from reading that data or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes are attempting to change the same data simultaneously. Two approaches to managing concurrent data access: Optimistic Concurrency Model Pessimistic Concurrency Model Concurrency Models Pessimistic Concurrency Default behavior: acquire locks to block access to data that another process is using. Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur). Avoids conflicts by acquiring a lock on data being read so no other processes can modify that data. Also acquires locks on data being modified so no other processes can access the data for either reading or modifying. Readers block writer, writers block readers and writers. Optimistic Concurrency Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. Default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs. Older versions of the data are saved so a process reading data can see the data as it was when the process started reading and not affected by any changes being made to that data. Processes modifying the data is unaffected by processes reading the data because the reader is accessing a saved version of the data rows. Readers do not block writers and writers do not block readers, but, writers can and will block writers. Transaction Processing A transaction is the basic unit of work in SQL Server. 
Transaction consists of SQL commands that read and update the database but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN and the end is marked by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties of a transaction. ACID Properties Transaction processing must guarantee the consistency and recoverability of SQL Server databases. Ensures all transactions are performed as a single unit of work regardless of hardware or system failure. A – Atomicity C – Consistency I – Isolation D- Durability Atomicity: Each transaction is treated as all or nothing – it either commits or aborts. Consistency: ensures that a transaction won’t allow the system to arrive at an incorrect logical state – the data must always be logically correct.  Consistency is honored even in the event of a system failure. Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions. Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data. Transaction Dependencies In addition to supporting all four ACID properties, a transaction might exhibit few other behaviors (known as dependency problems or consistency problems). Lost Updates: Occur when two processes read the same data and both manipulate the data, changing its value and then both try to update the original data to the new value. The second process might overwrite the first update completely. Dirty Reads: Occurs when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state. Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing. Phantoms: Occurs when membership in a set changes. It occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows. Isolation Levels SQL Server supports 5 isolation levels that control the behavior of read operations. Read Uncommitted All behaviors except for lost updates are possible. Implemented by allowing the read operations to not take any locks, and because of this, it won’t be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed. When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and move rows to a new location in the table, the allocation order scan can end up reading the same row twice. Also can happen if you have read a row before it is updated and then an update moves the row to a higher page number than your scan encounters later. 
Performing an allocation order scan under Read Uncommitted can cause you to miss a row completely – can happen when a row on a high page number that hasn’t been read yet is updated and moved to a lower page number that has already been read. Read Committed Two varieties of read committed isolation: optimistic and pessimistic (default). Ensures that a read never reads data that another application hasn’t committed. If another transaction is updating data and has exclusive locks on data, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that are visited, which means that data might be unavailable for others to use. A share lock doesn’t prevent others from reading but prevents them from updating. Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. Data being changed is still locked but other processes can see the previous versions of the data as it was before the update operation began. Repeatable Read This is a Pessimistic isolation level. Ensures that if a transaction revisits data or a query is reissued the data doesn’t change. That is, issuing the same query twice within a transaction cannot pickup any changes to data values made by another user’s transaction because no changes can be made by other transactions. However, this does allow phantom rows to appear. Preventing non-repeatable read is a desirable safeguard but cost is that all shared locks in a transaction must be held until the completion of the transaction. Snapshot Snapshot Isolation (SI) is an optimistic isolation level. Allows for processes to read older versions of committed data if the current version is locked. Difference between snapshot and read committed has to do with how old the older versions have to be. It’s possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution. Serializable This is the strongest of the pessimistic isolation level. Adds to repeatable read isolation level by ensuring that if a query is reissued rows were not added in the interim, i.e, phantoms do not appear. Preventing phantoms is another desirable safeguard, but cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes. In addition serializable isolation level requires that you lock data that has been read but also data that doesn’t exist. Ex: if a SELECT returned no rows, you want it to return no. rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values. If there is no index on the column, serializable isolation requires a table lock. Gets its name from the fact that running multiple serializable transactions at the same time is equivalent of running them one at a time. Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking, deadlocks because they are the fundamental blocks that make concurrency possible. Now if you are with me – let us continue learning for SQL Server Locking Basics. 
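    To make the lost-update and optimistic-concurrency ideas above concrete outside of T-SQL, here is a small, language-agnostic sketch in Python; the Account class and its version counter are illustrative assumptions, not how SQL Server implements row versioning.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance
        self.version = 0          # row version, the basis of the optimistic check

def read(account):
    # Each process takes a snapshot of the data plus the version it read.
    return {"balance": account.balance, "version": account.version}

def optimistic_update(account, snapshot, new_balance):
    # Commit only if nobody else changed the row since we read it.
    if account.version != snapshot["version"]:
        raise RuntimeError("update conflict: row changed since it was read")
    account.balance = new_balance
    account.version += 1

acct = Account(balance=100)
s1 = read(acct)                   # process 1 reads balance = 100
s2 = read(acct)                   # process 2 reads balance = 100
optimistic_update(acct, s1, s1["balance"] + 50)   # process 1 commits: balance = 150
try:
    # Without the version check this would silently overwrite process 1's work
    # (a lost update); with it, process 2 is told to re-read and retry.
    optimistic_update(acct, s2, s2["balance"] - 30)
except RuntimeError as e:
    print(e)
```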
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • Predicting advantages of database denormalization

    - by Janus Troelsen
    I was always taught to strive for the highest Normal Form of database normalization, and we were taught Bernstein's Synthesis algorithm to achieve 3NF. This is all very well, and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown when denormalizing. That way, you can build your list of FDs featuring 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated, or because it becomes harder to maintain consistency, since you have to update multiple fields using a transaction. Summary: given a 3NF FD set and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers are appreciated too.
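    I don't know of a standard formula, but as a starting point here is a back-of-the-envelope sketch (a toy cost model of my own, not taken from any paper) that weighs the read speedup against the write amplification and extra storage a denormalization introduces; the parameter values below are made up.

```python
def estimate(reads_per_sec, writes_per_sec, join_cost, single_table_cost,
             copies_of_duplicated_field, bytes_duplicated, rows):
    # Cost before: every read pays for the join; each write touches one place.
    before_read  = reads_per_sec * join_cost
    before_write = writes_per_sec * 1
    # Cost after: reads hit one table; each write must update every duplicated copy
    # (typically inside a transaction, which is where consistency gets harder).
    after_read   = reads_per_sec * single_table_cost
    after_write  = writes_per_sec * copies_of_duplicated_field
    extra_bytes  = bytes_duplicated * rows * (copies_of_duplicated_field - 1)
    return {
        "read_speedup": before_read / after_read,
        "write_slowdown": after_write / before_write,
        "extra_storage_bytes": extra_bytes,
    }

# Read-heavy workload: 500 reads/s paying a 3x join penalty vs. 20 writes/s.
print(estimate(reads_per_sec=500, writes_per_sec=20, join_cost=3.0,
               single_table_cost=1.0, copies_of_duplicated_field=4,
               bytes_duplicated=64, rows=1_000_000))
```

    A real prediction would replace the hand-picked cost factors with the optimizer's own estimates (e.g. from query plans) measured per query in the given query set.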

    Read the article

  • Poll Results: Foreign Key Constraints

    - by Darren Gosbell
    A few weeks ago I did the following post asking people if they used foreign key constraints in their star schemas. The poll is still open if you are interested in adding to it, but here is what the chart looks like as of today. (At the bottom of the poll itself there is a link to the live results; unfortunately I cannot link the live results here as the blogging platform blocks the required JavaScript.) Interestingly, the results are fairly even. Of the 78 respondents, fractionally over half at least aim to start with referential integrity in their star schemas. I did not want to influence the results by sharing my opinion, but my personal preference is to always aim to have foreign key constraints. At the same time, I am pragmatic about it: I do have projects where, for various reasons, some constraints are not defined. And I also have other designs that I have inherited, where it would just be too much work to go back and add foreign key constraints. If you are going to implement foreign keys in your star schema, they really need to be there at the start. In fact, this poll was the result of a feature request for BIDSHelper asking for a feature to check for null/missing foreign keys, and I am entirely convinced that BIDS is the wrong place for this sort of functionality. BIDS is a design tool; your data needs to be constantly checked for consistency. It's not that I think it's impossible to get a design working without foreign key constraints, but I like the idea of failing as soon as possible if there is an error, and enforcing foreign key constraints lets me "fail early" if there are consistency issues with my data. By far the biggest concern with foreign keys is performance, and I suppose I'm curious as to how often people actually measure and quantify this. I worked on a project a number of years ago that had very large data volumes, and we did find that foreign key constraints had a measurable impact, but what we did was to disable the constraints before loading the data, then enable and check them afterwards. This saved us time (although not as much as not having constraints at all), but still let us know early in the process if there were any consistency issues. For the people that do not have consistent data: if you have ETL processes that you control that are building a star schema which you also control, then to be blunt you only have yourself to blame. It is the job of the ETL process to make the data consistent. There are techniques for handling situations like missing data as well as early and late arriving data. Ralph Kimball's book, The Data Warehouse Toolkit, goes through some design patterns for handling data consistency. Having foreign key relationships can also help the relational engine to optimize queries, as noted in this recent blog post by Boyan Penev.
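    For reference, the disable-before-load / re-check-after pattern described above looks roughly like this when driven from a Python ETL step with pyodbc; the table name, connection string and load callback are placeholders to adapt to your own process.

```python
import pyodbc

FACT_TABLE = "dbo.FactSales"    # hypothetical fact table

def load_with_deferred_fk_checks(conn_str, load_rows):
    cnxn = pyodbc.connect(conn_str, autocommit=False)
    cur = cnxn.cursor()
    try:
        # 1. Disable the foreign key (and check) constraints on the fact table.
        cur.execute(f"ALTER TABLE {FACT_TABLE} NOCHECK CONSTRAINT ALL")
        # 2. Run the bulk load with the constraints out of the way.
        load_rows(cur)
        # 3. Re-enable WITH CHECK so SQL Server validates the loaded rows and marks
        #    the constraints as trusted again; this is the "fail early" step.
        cur.execute(f"ALTER TABLE {FACT_TABLE} WITH CHECK CHECK CONSTRAINT ALL")
        cnxn.commit()
    except pyodbc.Error:
        cnxn.rollback()
        raise
    finally:
        cnxn.close()
```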

    Read the article

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean that the user has to continually poll, or that eventual means an update has to take more than 500 ms to sync. For the sake of UX, we want to at least give the illusion of consistency or, if that's not possible, be as transparent as possible. With that in mind, I have this setup: an AngularJS web client consumes WebAPI RESTful services, which send commands to NServiceBus command handlers, which save to NEventStore, which dispatches events to NServiceBus event handlers, which send messages to a SignalR hub, which sends notifications back to the AngularJS web client. So with that setup, theoretically: someone initiates a request; the server validates the request and sends out the necessary commands; in the meantime the client gets a 200 response and updates the view ("working on it"); it gets a message some time later ("done, here's the updated data"). Here's where things get interesting: each command could spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc. Which event handler needs to notify the client? Should all of them, in a progress-bar-style update?
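    The stack in the question is NServiceBus + SignalR, so take this only as a language-agnostic sketch (written in Python, with invented names) of one common answer: none of the individual event handlers owns the notification; a small process manager subscribes to all of them, tracks progress per correlation id, and pushes either progress-bar updates or a single "done" message.

```python
EXPECTED = {"CustomerIDCreated", "CustomerNameUpdated", "CustomerAddressUpdated"}

class NewCustomerProcess:
    def __init__(self, notify):
        self.notify = notify            # e.g. a wrapper around the SignalR hub
        self.pending = {}               # correlation id -> events still outstanding

    def handle(self, correlation_id, event_name):
        outstanding = self.pending.setdefault(correlation_id, set(EXPECTED))
        outstanding.discard(event_name)
        done = len(EXPECTED) - len(outstanding)
        # Progress-bar style update after every event...
        self.notify(correlation_id, f"{done}/{len(EXPECTED)} steps complete")
        # ...and one final notification once the whole command has taken effect.
        if not outstanding:
            del self.pending[correlation_id]
            self.notify(correlation_id, "done, here's the updated data")

proc = NewCustomerProcess(notify=lambda cid, msg: print(cid, msg))
for evt in ["CustomerIDCreated", "CustomerNameUpdated", "CustomerAddressUpdated"]:
    proc.handle("cmd-123", evt)
```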

    Read the article

  • SQL SERVER – Guest Post – Jacob Sebastian – Filestream – Wait Types – Wait Queues – Day 22 of 28

    - by pinaldave
    Jacob Sebastian is a SQL Server MVP, Author, Speaker and Trainer. Jacob is one of the top rated expert community. Jacob wrote the book The Art of XSD – SQL Server XML Schema Collections and wrote the XML Chapter in SQL Server 2008 Bible. See his Blog | Profile. He is currently researching on the subject of Filestream and have submitted this interesting article on the very subject. What is FILESTREAM? FILESTREAM is a new feature introduced in SQL Server 2008 which provides an efficient storage and management option for BLOB data. Many applications that deal with BLOB data today stores them in the file system and stores the path to the file in the relational tables. Storing BLOB data in the file system is more efficient that storing them in the database. However, this brings up a few disadvantages as well. When the BLOB data is stored in the file system, it is hard to ensure transactional consistency between the file system data and relational data. Some applications store the BLOB data within the database to overcome the limitations mentioned earlier. This approach ensures transactional consistency between the relational data and BLOB data, but is very bad in terms of performance. FILESTREAM combines the benefits of both approaches mentioned above without the disadvantages we examined. FILESTREAM stores the BLOB data in the file system (thus takes advantage of the IO Streaming capabilities of NTFS) and ensures transactional consistency between the BLOB data in the file system and the relational data in the database. For more information on the FILESTREAM feature, visit: http://beyondrelational.com/filestream/default.aspx FILESTREAM Wait Types Since this series is on the different SQL Server wait types, let us take a look at the various wait types that are related to the FILESTREAM feature. FS_FC_RWLOCK This wait type is generated by FILESTREAM Garbage Collector. This occurs when Garbage collection is disabled prior to a backup/restore operation or when a garbage collection cycle is being executed. FS_GARBAGE_COLLECTOR_SHUTDOWN This wait type occurs when during the cleanup process of a garbage collection cycle. It indicates that that garbage collector is waiting for the cleanup tasks to be completed. FS_HEADER_RWLOCK This wait type indicates that the process is waiting for obtaining access to the FILESTREAM header file for read or write operation. The FILESTREAM header is a disk file located in the FILESTREAM data container and is named “filestream.hdr”. FS_LOGTRUNC_RWLOCK This wait type indicates that the process is trying to perform a FILESTREAM log truncation related operation. It can be either a log truncate operation or to disable log truncation prior to a backup or restore operation. FSA_FORCE_OWN_XACT This wait type occurs when a FILESTREAM file I/O operation needs to bind to the associated transaction, but the transaction is currently owned by another session. FSAGENT This wait type occurs when a FILESTREAM file I/O operation is waiting for a FILESTREAM agent resource that is being used by another file I/O operation. FSTR_CONFIG_MUTEX This wait type occurs when there is a wait for another FILESTREAM feature reconfiguration to be completed. FSTR_CONFIG_RWLOCK This wait type occurs when there is a wait to serialize access to the FILESTREAM configuration parameters. Waits and Performance System waits has got a direct relationship with the overall performance. In most cases, when waits increase the performance degrades. 
SQL Server documentation does not say much about how we can reduce these waits. However, following the FILESTREAM best practices will help you to improve the overall performance and reduce the wait types to a good extent. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: Filestream
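    As a small practical companion to the list above, here is a sketch (assuming pyodbc and a connection string you would supply yourself) that pulls the FILESTREAM-related wait types out of sys.dm_os_wait_stats so they can be tracked over time; the LIKE 'FS%' filter is a rough match for the wait types named in this post.

```python
import pyodbc

QUERY = """
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'FS%'          -- FS_, FSA_, FSAGENT, FSTR_ wait types
ORDER BY wait_time_ms DESC;
"""

def filestream_waits(conn_str):
    cnxn = pyodbc.connect(conn_str)
    try:
        rows = cnxn.cursor().execute(QUERY).fetchall()
    finally:
        cnxn.close()
    return [(r.wait_type, r.waiting_tasks_count, r.wait_time_ms, r.signal_wait_time_ms)
            for r in rows]

for row in filestream_waits("DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;Trusted_Connection=yes"):
    print(row)
```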

    Read the article

  • What open source database platform is most easily transferred from my personal machine into a window

    - by Tom
    I would like eventual interaction with MS Dynamics SL and/or MindTouch Core (running on VMware) for eventual intranet and/or internet display. I guess I am asking for front-end and back-end recommendations for a database I am constructing, but since this is my first major project I would greatly appreciate any help and advice. I would also love an opportunity to learn a new language, so the code base could be in any language. I do have a few more related questions for discussion: What is the viability of using Google hosting to provide the service to the public for free? Should I implement Plone or another CMS if I have a large amount of output? Is there a structuring questionnaire or standards publication I could reference? Does UML diagramming provide additional options for portability? Thank you.

    Read the article

  • OpenVPN Problems, connecting with Windows OpenVPC client to Linux OpenVPN

    - by Filip Ekberg
    After following this guide and connecting to the VPN server, I get the following error: Sat Mar 06 19:43:08 2010 us=127000 NOTE: failed to obtain options consistency info from peer -- this could occur if the remote peer is running a version of OpenVPN before 1.5-beta8 or if there is a network connectivity problem, and will not necessarily prevent OpenVPN from running (0 bytes received from peer, 0 bytes authenticated data channel traffic) -- you can disable the options consistency check with --disable-occ. I am using Arch Linux and installed openvpn with Pacman. I want to achieve the following: connect to the VPN server and be able to route certain made-up hosts through it. Is this possible? openvpn --version gives me the following: OpenVPN 2.1.1 i686-pc-linux-gnu [SSL] [LZO2] [EPOLL] built on Jan 31 2010 Originally developed by James Yonan Copyright (C) 2002-2009 OpenVPN Technologies, Inc. <[email protected]> Suggestions?

    Read the article

  • Attention NYC Area Marketers: Don't Miss This Executive Breakfast on Brand Building in the Digital Era

    - by Christie Flanagan
    Presenting and Managing Digital Content – A New Approach Reach Your Audiences Where They Are with Multi-Channel Marketing Attention marketers in the greater New York City area! Oracle Platinum Partner, Bluenog, invites you to an executive breakfast seminar on brand building in the digital era. In an age where consumers are spending increasing amounts of their time online, interacting, communicating and being influenced by other brands, you too must go online with a coordinated plan. And, given the hundreds, if not thousands, of places that might be relevant, having the right content and the right tools are critical. This two-part presentation will focus on the growing need for content and connection in building and maintaining your brand, as well as the role of technology in helping you maintain brand consistency, reach and interaction while simplifying delivery to web, tablet, mobile, and social audiences. Location Oracle Offices 520 Madison Ave, 30th Floor New York, NY  10022 Day/Time Thursday, May 3, 2012 9:30 AM to 11:30 AM About the Speakers Agenda: Michelle Pujadas, is an award-winning marketer and communicator, who has worked with more than 125 companies to help them package, launch and expand their brand presence, online and off. Michelle is the Founder and co-CEO of Zer0 to 5ive, a strategic marketing and communications firm that focuses on B2B and B2C technology companies, with offices in NY, Philadelphia and Chicago. Peter Conrad, the E 2.0 Practice Director for Bluenog, focuses on translating exciting visions for user experiences into well executed technical implementations leveraging advanced WebCenter technology from Oracle. Bluenog provides the systems and professional services today's forward-looking marketing organizations need to convert content, business capabilities, and communications into productive interactions with customers and prospects. 09:30am Arrival, Registration & Breakfast 10:00am Brand Building through Content and Connection, presented by Michelle Pujadas, Founder and co-CEO of Zer0 to 5ive 10:30am Leveraging Technology for Brand Reach, Consistency and Interaction, presented by Peter Conrad, E2.0 Practice Director at Bluenog 11:15am Q&A 11:30am Adjourn

    Read the article

  • Is it important for reflection-based serialization maintain consistent field ordering?

    - by Matchlighter
    I just finished writing a packet builder that dynamically loads data into a data stream for eventual network transmission. Each builder operates by finding fields in a given class (and its superclasses) that are marked with a @data annotation. When I finished my implementation, I remembered that getFields() does not return results in any specific order. Should reflection-based methods for serializing arbitrary data (like my packets) attempt to preserve a specific field ordering (such as alphabetical), and if so, how?
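    The question is about Java's getFields(), but as a hedged analogue, here is the same idea in Python: a tiny reflection-style packer that sorts the marked fields by name so the wire layout never depends on declaration or reflection order (the LoginPacket type is invented for illustration).

```python
import dataclasses
import struct

@dataclasses.dataclass
class LoginPacket:                    # illustrative packet, not from the question
    user_id: int
    attempts: int
    timestamp: int

def pack(packet):
    # Alphabetical ordering is arbitrary but stable across runs, classes and VMs,
    # which is the property the serializer needs; both ends must agree on the rule.
    fields = sorted(dataclasses.fields(packet), key=lambda f: f.name)
    values = [getattr(packet, f.name) for f in fields]
    return struct.pack(">" + "q" * len(values), *values)   # big-endian 64-bit ints

print(pack(LoginPacket(user_id=7, attempts=3, timestamp=1700000000)).hex())
```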

    Read the article

  • Creating country specific twitter/facebook accounts

    - by user359650
    I see many companies that have an international presence trying to localize their social media presence by creating country- or language-specific accounts. However, some seem to have done so without following a consistent pattern, one example being the World Wildlife Fund when you look at their Twitter accounts: World_Wildlife : verified account with 200K followers WWF : main account with 800K followers www_uk : lower case with underscore between WWF and country indicator WWFCanada : upper case with country indicator attached to WWF ... I am planning to build a website which hopefully will grow globally, and I would like to avoid this sort of inconsistency. Also, I was comparing what Twitter and Facebook allow in their usernames and found out that they don't allow the same characters to be used (for instance, the former doesn't allow . whereas the latter does), making it difficult to ensure consistency across social networks. Hence my questions: Are there known naming schemes for creating localized Twitter and Facebook accounts while maintaining a certain consistency between them (best effort)? Is there any research out there showing whether some schemes are better than others in terms of readability and/or SEO?
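    As a sketch of enforcing one naming scheme up front, here is a small Python check that builds a localized handle and validates it against both platforms' username rules; the character and length limits below are the commonly cited ones and should be treated as assumptions to verify against the current signup rules.

```python
import re

RULES = {
    "twitter":  re.compile(r"^[A-Za-z0-9_]{1,15}$"),     # letters, digits, underscore
    "facebook": re.compile(r"^[A-Za-z0-9.]{5,50}$"),      # periods allowed, no underscore
}

def localized_handle(brand, country_code):
    """One consistent pattern, e.g. MyBrandFR, checked against every network."""
    handle = f"{brand}{country_code.upper()}"
    problems = [net for net, rule in RULES.items() if not rule.match(handle)]
    return handle, problems

for cc in ["uk", "ca", "fr"]:
    print(localized_handle("MyBrand", cc))
```

    Since Twitter disallows periods and Facebook disallows underscores, a scheme that avoids separators altogether (BrandCC) is the simplest way to keep the same handle everywhere.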

    Read the article

  • Cannot Boot Win XP or Ubuntu from hard drive - get Input Not Supported

    - by Jim Hudspeth
    1) Downloaded the 11.10 ISO file to a Dell XP workstation
    2) Made a bootable USB using Pendrivelinux
    3) Installed to the hard drive using option 1 (install alongside Windows)
    4) Rebooted when instructed
    5) Booted into Ubuntu just fine (first time)
    6) Attempted a restart - got the first splash screen followed by "input not supported" - tapped ESC and eventually got into Ubuntu
    7) Later attempts failed - got "input not supported"; no eventual boot
    8) Many retries holding / tapping various keys - same result
    9) Booted from USB - all files appear to be in place - can access GRUB on the hard drive
    Suggestions appreciated - I must be able to boot XP. Thanks in advance.

    Read the article

  • Recommendations for stable, reliable flash drives

    - by Josh Kelley
    We're looking to purchase some flash drives for use in some embedded devices. Most of our requirements aren't too different from those for a generic "good, fast" flash drive: reliability is very important, speed is good, and, so that the drive will fit, the case shouldn't be too large (so no OCZ Throttles). Consistency is also a major priority; we'd like to be able to buy more or less the same product a year or two from now without having to worry about the manufacturer swapping drive components for less reliable or slower parts. (We've been burned already by our previous manufacturer doing this.) Any recommendations, especially regarding consistency? I can read Ars Technica to get an overview of current models, but which models are consistently good?

    Read the article
