Search Results

Search found 615 results on 25 pages for 'eventual consistency'.

Page 2 of 25.

  • HD Crash SQL server -> DBCC - consistency errors in table 'sysindexes'

    - by Julian de Wit
    Hello. A client of mine has had an HD crash and a SQL DB got corrupt. They did not make backups, so they have a big problem. When I tried (as a last-ditch measure) to run a DBCC repair, I got the following messages. Can anybody help me with this?

        Server: Msg 8966, Level 16, State 1, Line 1
        Could not read and latch page (1:872) with latch type SH. sysindexes failed.
        Server: Msg 8944, Level 16, State 1, Line 1
        Table error: Object ID 2, index ID 0, page (1:872), row 11. Test (columnOffsets->IsComplex (varColumnNumber) && (ColumnId == COLID_HYDRA_TEXTPTR || ColumnId == COLID_INROW_ROOT || ColumnId == COLID_BACKPTR)) failed. Values are 2 and 5.
        The repair level on the DBCC statement caused this repair to be bypassed.
        CHECKTABLE found 0 allocation errors and 1 consistency errors in table 'sysindexes' (object ID 2).
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    Read the article

  • Consistency of a Trigger Procedure (BEFORE ROW trigger) in PostgreSQL

    - by elgcom
    Using PostgreSQL, I am trying to use a trigger procedure to run a consistency check on INSERT. The question is whether BEFORE INSERT FOR EACH ROW makes sure each row to insert is checked and inserted one after another (check new row 1, insert row 1, check new row 2, insert row 2, ...), or whether I need an extra lock on the table to survive concurrent inserts.

        -- unexpired product name is unique.
        CREATE TABLE product (
            "name"    VARCHAR(100) NOT NULL,
            "expired" BOOLEAN NOT NULL
        );

        CREATE OR REPLACE FUNCTION check_consistency() RETURNS TRIGGER AS $$
        BEGIN
            IF EXISTS (SELECT * FROM product WHERE name = NEW.name AND expired = 'false') THEN
                RAISE EXCEPTION 'duplicated!!!';
            END IF;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trigger_check_consistency
            BEFORE INSERT ON product
            FOR EACH ROW EXECUTE PROCEDURE check_consistency();

        INSERT INTO product VALUES ('prod1', true);
        INSERT INTO product VALUES ('prod1', false);
        INSERT INTO product VALUES ('prod1', false); -- exception!

    This is OK:

        name | expired
        ==============
        p1   | true
        p1   | true
        p1   | false

    This is not OK:

        name | expired
        ==============
        p1   | true
        p1   | false
        p1   | false

    Or maybe I should ask: how can I use a trigger to implement a PRIMARY KEY or UNIQUE constraint-like check in SQL?

    Read the article

  • Why do I bother with RAID 10?

    - by GrumpyOldDBA
    Before I post anything, I just want to clarify what I mean by RAID 10: sets of mirrored pairs that have been striped, as against a RAID 0 which has then been mirrored. I've just had a disk failure in the data array for one of my dev servers; it's an eight-disk RAID 10. No real worries, replace the disk and off we go. But no: the HP engineers told me from the diagnostics (done to ensure I got the right replacement under warranty) that not only had a disk failed, but I'd lost all the data...(read more)

    Read the article

  • Differences in documentation for sys.dm_exec_requests

    - by AaronBertrand
    I've already complained about this on Connect (see #641790), but I just wanted to point out that if you're trying to make sense of the sys.dm_exec_requests documentation and what it lists as the commands supported by the percent_complete column, you should check which version of the documentation you're reading. I noticed the following discrepancies. I can't explain why certain operations are missing, except that the Denali topic was generated from the 2008 topic (or maybe from the 2008 R2 topic before...(read more)

    Read the article

  • Azure Table Storage Data Consistency

    - by SeeR
    Let's say I have a table in Azure Table Storage:

        public class MyTable
        {
            public string PK { get; set; }
            public string RowPK { get; set; }
            public double Amount { get; set; }
        }

    and a message in an Azure Queue which says "add 10 to Amount". Now let's say one worker role:

        1. takes this message from the queue,
        2. takes the row from the table,
        3. does Amount += 10,
        4. updates the row in the table,
        5. and fails.

    After a while the message is available in the queue again, so the next worker role:

        1. takes this message from the queue,
        2. takes the row from the table,
        3. does Amount += 10,
        4. updates the row in the table,
        5. and removes the message from the queue.

    These actions result in Amount += 20 instead of Amount += 10. How can I avoid such situations?
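
    A common remedy for this shape of problem (an editorial sketch, not from the question) is optimistic concurrency plus an idempotency marker: store the last applied message ID in the entity and make the update conditional on the entity's ETag, so a redelivered message is detected and skipped. In the Java sketch below, Row, TableClient and their methods are hypothetical stand-ins for a table-storage client, not the real Azure SDK:

        // Hypothetical types standing in for a table-storage client (Java 16+ for records).
        record Row(String etag, double amount, String lastMsgId) {}

        interface TableClient {
            Row fetch(String pk, String rk);
            // Atomically replaces the row only if its stored ETag still equals expected.etag().
            boolean replaceIfMatch(String pk, String rk, Row expected, Row updated);
        }

        final class IdempotentAdder {
            private final TableClient client;
            IdempotentAdder(TableClient client) { this.client = client; }

            void add(String pk, String rk, String msgId, double delta) {
                while (true) {
                    Row row = client.fetch(pk, rk);
                    if (msgId.equals(row.lastMsgId())) return; // message already applied: skip
                    Row updated = new Row(row.etag(), row.amount() + delta, msgId);
                    if (client.replaceIfMatch(pk, rk, row, updated)) return;
                    // ETag mismatch: another worker wrote in between; re-read and retry.
                }
            }
        }

    With this shape, the failed first worker either never wrote the row (so the retry applies the delta once) or wrote it together with the message ID (so the retry sees lastMsgId and skips); either way Amount grows by exactly 10.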

    Read the article

  • Java LockSupport Memory Consistency

    - by Lachlan
    Java 6 API question. Does calling LockSupport.unpark(thread) have a happens-before relationship to the return from LockSupport.park in the just-unparked thread? I strongly suspect the answer is yes, but the Javadoc doesn't seem to mention it explicitly.
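
    For reference, the handoff pattern at stake looks like the sketch below (an editorial illustration, not from the question). It is only correct if the happens-before edge being asked about actually holds; also, park() may return spuriously, so production code would re-check a condition in a loop:

        import java.util.concurrent.locks.LockSupport;

        public class ParkHandoff {
            static int payload; // deliberately non-volatile: visibility relies on the park/unpark edge

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(() -> {
                    LockSupport.park();                   // wait until unparked (or a spurious wakeup)
                    System.out.println("saw " + payload); // prints 42 iff the edge holds
                });
                reader.start();
                Thread.sleep(100);          // crude way to let the reader reach park()
                payload = 42;               // written before unpark()
                LockSupport.unpark(reader); // wakes the reader
                reader.join();
            }
        }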

    Read the article

  • Getting timing consistency in Linux

    - by Jim Hunziker
    I can't seem to get a simple program (with lots of memory access) to achieve consistent timing in Linux. I'm using a 2.6 kernel, and the program is being run on a dual-core processor with realtime priority. I'm trying to disable cache effects by declaring the memory arrays as volatile. Below are the results and the program. What are some possible sources of the outliers?

    Results:

        Number of trials: 100
        Range: 0.021732s to 0.085596s
        Average Time: 0.058094s
        Standard Deviation: 0.006944s
        Extreme Outliers (2 SDs away from mean): 7
        Average Time, excluding extreme outliers: 0.059273s

    Program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>
        #include <sched.h>
        #include <unistd.h>
        #include <sys/time.h>

        #define NUM_POINTS 5000000
        #define REPS 100

        unsigned long long getTimestamp()
        {
            unsigned long long usecCount;
            struct timeval timeVal;
            gettimeofday(&timeVal, 0);
            usecCount = timeVal.tv_sec * (unsigned long long) 1000000;
            usecCount += timeVal.tv_usec;
            return (usecCount);
        }

        double convertTimestampToSecs(unsigned long long timestamp)
        {
            return (timestamp / (double) 1000000);
        }

        int main(int argc, char* argv[])
        {
            unsigned long long start, stop;
            double times[REPS];
            double sum = 0;
            double scale, avg, newavg, median;
            double stddev = 0;
            double maxval = -1.0, minval = 1000000.0;
            int i, j, freq, count;
            int outliers = 0;
            struct sched_param sparam;

            sched_getparam(getpid(), &sparam);
            sparam.sched_priority = sched_get_priority_max(SCHED_FIFO);
            sched_setscheduler(getpid(), SCHED_FIFO, &sparam);

            volatile float* data;
            volatile float* results;
            data = calloc(NUM_POINTS, sizeof(float));
            results = calloc(NUM_POINTS, sizeof(float));

            for (i = 0; i < REPS; ++i)
            {
                start = getTimestamp();
                for (j = 0; j < NUM_POINTS; ++j)
                {
                    results[j] = data[j];
                }
                stop = getTimestamp();
                times[i] = convertTimestampToSecs(stop - start);
            }

            free(data);
            free(results);

            for (i = 0; i < REPS; i++)
            {
                sum += times[i];
                if (times[i] > maxval) maxval = times[i];
                if (times[i] < minval) minval = times[i];
            }
            avg = sum / REPS;

            for (i = 0; i < REPS; i++)
                stddev += (times[i] - avg) * (times[i] - avg);
            stddev /= REPS;
            stddev = sqrt(stddev);

            for (i = 0; i < REPS; i++)
            {
                if (times[i] > avg + 2 * stddev || times[i] < avg - 2 * stddev)
                {
                    sum -= times[i];
                    outliers++;
                }
            }
            newavg = sum / (REPS - outliers);

            printf("Number of trials: %d\n", REPS);
            printf("Range: %fs to %fs\n", minval, maxval);
            printf("Average Time: %fs\n", avg);
            printf("Standard Deviation: %fs\n", stddev);
            printf("Extreme Outliers (2 SDs away from mean): %d\n", outliers);
            printf("Average Time, excluding extreme outliers: %fs\n", newavg);

            return 0;
        }

    Read the article

  • glibc Heap Consistency Checking

    - by idimba
    According to posts from 2008 (I can't find them right now), the glibc heap check doesn't work in a multithreaded environment. Is that still the situation now in 2010? Is heap checking enabled by default (gcc 4.1.2)? I don't set MALLOC_CHECK_ and am not aware of calling mcheck(), but I still sometimes receive a double free glibc error with a backtrace. Maybe it's enabled by some compilation flag?

    Read the article

  • Excel string manipulation to check data consistency

    - by chefsmart
    Background information: there are nearly 7000 individuals, and there is data about their performances in one, two, or three tests. Every individual has taken the 1st test (let's call it Test M). Some of those who have taken Test M have also taken Test I, and some of those who have taken Test I have also taken Test B. For the first two tests (M and I), students can score grades I, II, or III. Depending on the grades they are awarded points: 3 for grade I, 2 for II, 1 for III. The last test, Test B, has just a pass or fail result with no grades. Those passing this test get 1 point, with no points for failure. (Well, actually, grades are awarded, but all grades are given a common 1 point.)

    An amateur has entered data to represent these students and their grades in an Excel file. Problem is, this person has done the worst thing possible: he has developed his own notation and entered all test information in a single cell, and made my life hell. The file originally had two text columns, one for the individual's id and the second for test info, if one could call it that. It's horrible, I know, and I am suffering. In the image, if you see "M-II-2 I-III-1", it means the person got grade II in Test M for 2 points and grade III in Test I for 1 point. Some have taken only one test, some two, and some three. When the file came to me for processing and analyzing the performance of students, I sent it back with instructions to insert 3 additional columns with only the grades for the three tests. The file now looks as follows: columns C and D represent grades I, II, and III using 1, 2 and 3 respectively. Column C is for Test M, column D for Test I. Column E says BA (B Achieved!) if the individual has passed Test B.

    Now that you have the above information, let's get to the problem. I don't trust this data and want to check whether the string in column B matches the figures in columns C, D and E. All help is really appreciated. P.S. I had exported this to MySQL via ODBC, and that is why you are seeing those NULLs. I tried doing this in MySQL too, and will accept either a MySQL or an Excel solution; I don't have a preference. Edit: see file with sample data.

    Read the article

  • Rtti data manipulation and consistency in Delphi 2010

    - by Coco
    Does anyone have an idea how I can make TValue use a reference to the original data? In my serialization project, I use (as suggested in XML Serialization) a generic serializer which stores TValues in an internal tree structure (similar to the MemberMap in the example). This member tree should also be used to create a dynamic setup form and manipulate the data. My idea was to define a property for the data:

        TDataModel<T> = class
        {...}
        private
            FData : TValue;
            function GetData : T;
            procedure SetData(Value : T);
        public
            property Data : T read GetData write SetData;
        end;

    The implementation of the GetData/SetData methods:

        procedure TDataModel<T>.SetData(Value : T);
        begin
            FData := TValue.From<T>(Value);
        end;

        function TDataModel<T>.GetData : T;
        begin
            Result := FData.AsType<T>;
        end;

    Unfortunately, the TValue.From method always makes a copy of the original data. So whenever the application makes changes to the data, the DataModel is not updated, and vice versa: if I change my DataModel in a dynamic form, the original data is not affected. Sure, I could always use the Data property before and after changing anything, but as I use a lot of RTTI inside my DataModel, I do not really want to do this every time. Perhaps someone has a better suggestion?

    Read the article

  • Alternatives to static methods on interfaces for enforcing consistency

    - by jayshao
    In Java, I'd like to be able to define marker interfaces that force implementations to provide static methods. For example, for simple text serialization/deserialization I'd like to be able to define an interface that looks something like this:

        public interface TextTransformable<T> {
            public static T fromText(String text);
            public String toText();
        }

    Since interfaces in Java can't contain static methods, though (as noted in a number of other posts/threads: here, here, and here), this code doesn't work. What I'm looking for is some reasonable paradigm to express the same intent, namely symmetric methods, one of which is static, enforced by the compiler. Right now the best we can come up with is some kind of static factory object or generic factory, neither of which is really satisfactory. Note: in our case the primary use-case is that we have many, many "value-object" types: enums, or other objects that have a limited number of values, typically carry no state beyond their value, and which we parse/de-parse thousands of times a second, so we actually do care about reusing instances (like Float, Integer, etc.) and the impact on memory consumption/GC. Any thoughts?
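
    For the record, the "generic factory" the question dismisses usually takes roughly this shape (a sketch with illustrative names, not from the question; note also that Java 8 later allowed static methods with bodies in interfaces, but still cannot force every implementation to supply one):

        // Both halves of the contract live on a factory/codec interface, so the
        // compiler enforces them per type; codec instances can be cached and reused.
        public interface TextCodec<T> {
            T fromText(String text);   // stands in for the static fromText
            String toText(T value);    // stands in for the instance toText
        }

        final class IntegerCodec implements TextCodec<Integer> {
            @Override public Integer fromText(String text) {
                return Integer.valueOf(text);  // valueOf reuses cached instances
            }
            @Override public String toText(Integer value) {
                return value.toString();
            }
        }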

    Read the article

  • Query to check the consistency of records

    - by orunner
    I have four tables:

        TableA: id1, id2, id3, value
        TableB: id1, desc
        TableC: id2, desc
        TableD: id3, desc

    What I need to do is check whether all combinations of id1, id2, and id3 from tables B, C, and D exist in TableA. In other words, TableA should contain all possible combinations of id1, id2, and id3 that are stored in the other three tables.

    Read the article

  • Django model data consistency

    - by Mark
    When creating a form, you can define a bunch of methods, clean_xyz, to make sure the data gets forced into the correct format. Is there any way to do this at the model level? Perhaps I can override the field setters somehow? I want it so that if I write something like:

        my_address.postal_code = 'a1b2c3'

    it will automatically get formatted as A1B 2C3, and perhaps throw an exception if it can't be converted. That way I know I'll never have any malformed data in the database.

    Read the article

  • Guaranteeing ACID properties for InnoDB databases

    - by plinehan
    What steps must one take to ensure that an otherwise default-configured InnoDB server is truly ACID compliant? The InnoDB configuration page mentions that the hardware itself must be configured to honor fsync calls, i.e. disable any write-back caches. This page mentions some other concerns, but may be conflating the binary log and the InnoDB log, and may be a bit out of date regarding default settings for MySQL 5.x. Upon reading the binary log documentation page, it would seem that the "sync_binlog=1" setting is not required for ACID properties in general, only for ACID properties with respect to point-in-time recovery and replication. So, is disabling write-back disk caching sufficient, or are there other settings that must be tweaked?
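
    For orientation, the settings usually cited for full durability on MySQL 5.x look like the fragment below (a hedged sketch of a my.cnf excerpt; verify the defaults against the docs for your exact version):

        [mysqld]
        # Write and fsync the InnoDB log at every commit (1 is the documented default).
        innodb_flush_log_at_trx_commit = 1
        # fsync the binary log at every commit; per the question, this matters for
        # point-in-time recovery and replication rather than ACID in general.
        sync_binlog = 1

    On top of these, disabling the drives' write-back caches (or using a battery-backed controller cache) remains a hardware-level requirement, as the InnoDB configuration page says.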

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted from server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

    Read the article

  • Why does the yum index get corrupted?

    - by TomOnTime
    Occasionally yum's cache gets corrupted and we see errors like this:

        error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
        error: cannot open Packages index using db3 - (-30974)
        error: cannot open Packages database in /var/lib/rpm

    The workaround is rm -f /var/lib/rpm/__db* and then the next yum command regenerates the data. My question is: what is likely to be causing this? Is there some common task that ignores locks or has some other problem that causes this? We have hundreds of CentOS machines and there is no pattern to which ones see this problem. It could be a "one in a million" issue, which at large scale is seen often. NOTE: I realize this is a very open-ended question, but if an answer finds the cause, I will go back and turn the question into something more canonical that directly relates to the specific issue.

    Read the article

  • MSSQL error: consistency-based I/O error - can it be caused by an MSSQL or OS problem?

    - by Philipp Keller
    This is what I saw in the Windows error log:

        SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x19fedd20; actual: 0x19fed5e3). It occurred during a read of page (1:1764) in database ID 6 at offset 0x00000000dc8000 in file 'D:\mssql\local_repository_pbdiffimport.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

    I ran DBCC CHECKDB, which told me I should repair with the option REPAIR_ALLOW_DATA_LOSS, so I eventually ran:

        DBCC CHECKDB (my_db_name, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS

    But that resulted in about 2,000 rows being lost. I restored a backup, but now I'm afraid this will happen again, since we already had a consistency problem in the same database about two weeks ago; that time it happened in an index (recreating the indexes solved the problem). We have investigated the disks: the RAID 5 looks good, no errors, and none of the disk-check utilities have revealed any hardware problem. Can this be caused by the OS (Windows Server 2003) or by MSSQL (SQL Server 2005)?

    Read the article

  • How to Treat Race Conditions on Session Data in a Web Application?

    - by Morgan Cheng
    I work on an ASP.NET application with heavy AJAX request traffic. Once a user logs in to our web application, a session is created to store that user's state. Currently, our solution to keep session data consistent is quite simple and brutal: each request needs to acquire an exclusive lock before being processed. This works fine for a traditional web application. But when the web application adds AJAX support, it is no longer efficient. It is quite possible that multiple AJAX requests are sent to the server at the same time without reloading the web page. If all AJAX requests are serialized by the exclusive lock, the response is not so quick. Worse, many AJAX requests that don't access the same session variables are blocked as well. If we don't have an exclusive lock for each request, then we need to treat every race condition carefully to avoid deadlock. I'm afraid that would make the code complex and buggy. So, is there any best practice to keep session data consistent while keeping the code simple and clean?
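
    One middle ground between a single exclusive lock and fully lock-free code is per-variable locking: requests that touch disjoint session variables proceed in parallel, while requests on the same variable still serialize. The question is about ASP.NET, but the idea is language-neutral; here is a minimal Java sketch (names illustrative, not from the question):

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.locks.ReentrantLock;
        import java.util.function.Supplier;

        // One lock per session variable instead of one exclusive lock per session.
        final class SessionLocks {
            private final ConcurrentHashMap<String, ReentrantLock> locks =
                    new ConcurrentHashMap<>();

            <T> T withLock(String variableName, Supplier<T> action) {
                ReentrantLock lock =
                        locks.computeIfAbsent(variableName, k -> new ReentrantLock());
                lock.lock();
                try {
                    return action.get();
                } finally {
                    lock.unlock();
                }
            }
        }

    The deadlock worry from the question stays manageable if a request that needs several variables always acquires their locks in one fixed (for example, alphabetical) order.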

    Read the article

  • Should we encourage coding styles in favor of developers' autonomy, or discourage them in favor of consistency?

    - by Saeed Neamati
    One developer writes if/else blocks with one-line code statements:

        if (condition)
            // do this one-line code
        else
            // do this one-line code

    Another uses curly braces for all of them:

        if (condition)
        {
            // do this one-line code
        }
        else
        {
            // do this one-line code
        }

    One developer first instantiates an object, then uses it:

        HelperClass helper = new HelperClass();
        helper.DoSomething();

    Another developer instantiates and uses the object in one line:

        new HelperClass().DoSomething();

    One developer is more at ease with arrays and for loops:

        string[] ordinals = new string[] { "First", "Second", "Third" };
        for (int i = 0; i < ordinals.Length; i++)
        {
            // do something
        }

    Another writes:

        List<string> ordinals = new List<string>() { "First", "Second", "Third" };
        foreach (string ordinal in ordinals)
        {
            // do something
        }

    I'm sure you know what I'm talking about. I call it coding style (because I don't know what else it's called). But whatever we call it, is it good or bad? Does encouraging it lead to higher developer productivity? Should we ask developers to write code the way we tell them, so that the whole system becomes style-consistent?

    Read the article

  • What can I do to get Mozilla Firefox to preload the eventual image result?

    - by Dalal
    I am attempting to preload images using JavaScript. I have declared an array as follows, with image links from different places:

        var imageArray = new Array();
        imageArray[0] = new Image();
        imageArray[1] = new Image();
        imageArray[2] = new Image();
        imageArray[3] = new Image();
        imageArray[0].src = "http://www.bollywoodhott.com/wp-content/uploads/2008/12/arjun-rampal.jpg";
        imageArray[1].src = "http://labelleetleblog.files.wordpress.com/2009/06/josie-maran.jpg";
        imageArray[2].src = "http://1.bp.blogspot.com/_22EXDJCJp3s/SxbIcZHTHTI/AAAAAAAAIXc/fkaDiOKjd-I/s400/black-male-model.jpg";
        imageArray[3].src = "http://www.iill.net/wp-content/uploads/images/hot-chick.jpg";

    The image fade and transformation effects that I am doing with this array work properly for the first three images, but for the last one, imageArray[3], the actual image data does not get preloaded, and it completely ruins the effect, since the actual image data loads afterwards, only at the moment it needs to be displayed. This happens because the last link, http://www.iill.net/wp-content/uploads/images/hot-chick.jpg, is not a direct link to the image: if you go to that link, your browser will redirect you to the actual location. Now, my image preloading code works perfectly well in Chrome, and the effects look great, because Chrome seems to preload the actual data, the eventual image that is to be shown. This means that in Chrome, if I preloaded an image that redirects to 'stop stealing my bandwidth', then the image that gets preloaded is 'stop stealing my bandwidth'. How can I modify my code to get Firefox to behave the same way?

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the Cloud in the Big Data story. In this article we will understand the role of operational databases in supporting the Big Data story. Even though we keep talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can't just exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.

    Real World Example

    Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship status will also show up in the same list. Now here is the question: do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually, the answer is that Facebook uses MySQL to do various updates in the timeline as well as for various events we do on their homepage. It is really difficult to part from operational databases in any real-world business. Now we will see a few examples of operational databases:

        Relational Databases (this blog post)
        NoSQL Databases (this blog post)
        Key-Value Pair Databases (tomorrow's post)
        Document Databases (tomorrow's post)
        Columnar Databases (the day after's post)
        Graph Databases (the day after's post)
        Spatial Databases (the day after's post)

    Relational Databases

    We have earlier discussed the RDBMS role in the Big Data story in detail, so we will not cover it extensively here. The relational database is pretty much everywhere in most businesses which have been around for many years. The importance and existence of relational databases are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open-source and widely accepted database, I suggest trying MySQL, as it has been very popular in the last few years. I also suggest you try out PostgreSQL. Besides many other essential qualities, PostgreSQL has very interesting licensing policies. The PostgreSQL license allows modification and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, as well as contribute to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.

    Nonrelational Databases (NoSQL)

    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL. There are plenty of NoSQL databases out in the market, and selecting the right one is always very challenging. Here are a few of the properties which are essential to consider when selecting the right NoSQL database for operational purposes:

        Data and Query Model
        Persistence of Data and Design
        Eventual Consistency
        Scalability

    Though all of the above properties are interesting to have in any NoSQL database, the one which most attracts me is Eventual Consistency.

    Eventual Consistency

    An RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as the key mechanism for ensuring data consistency, whereas a nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects the unexpected, often. In a large distributed system there are always various nodes joining and various nodes being removed, as such systems often run on commodity servers. This happens either intentionally or accidentally. Even when one or more nodes are down, the entire system is expected to keep functioning normally. Applications should be able to perform various updates as well as retrievals of the data successfully, without any issue. Additionally, this also means that the system is expected to return the same updated data at any time from all the functioning nodes. Irrespective of when a node joins the system, if it is marked to hold some data, it should eventually contain the same updated data.

    As per Wikipedia, eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words: informally, if no additional updates are made to a given data item, all reads of that item will eventually return the same value.

    Tomorrow

    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
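
    To make "eventually all reads return the last updated value" concrete, here is a toy last-writer-wins replica in Java (a sketch of the concept only; real Dynamo-style stores use vector clocks or similar instead of a single version number):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Toy last-writer-wins replica (Java 16+ for records). Each write carries
        // a version; merging keeps the highest version, so once writes stop,
        // replicas that sync with each other converge on the same value.
        final class LwwReplica {
            record Versioned(long version, String value) {}

            private final Map<String, Versioned> store = new ConcurrentHashMap<>();

            void put(String key, long version, String value) {
                store.merge(key, new Versioned(version, value),
                        (oldV, newV) -> newV.version() > oldV.version() ? newV : oldV);
            }

            String get(String key) {
                Versioned v = store.get(key);
                return v == null ? null : v.value();
            }

            // Anti-entropy: pull every entry from a peer; merge keeps newer versions.
            void syncFrom(LwwReplica peer) {
                peer.store.forEach((k, v) -> put(k, v.version(), v.value()));
            }
        }

    After a.syncFrom(b) and b.syncFrom(a), both replicas answer get() with the same value for every key, which is exactly the informal guarantee quoted above.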

    Read the article
