Search Results

Search found 1047 results on 42 pages for 'locking'.

  • Windows Server 08 R2 file share File locking, OSX clients

    - by Keith Loughnane
    I've spent the last two weeks banging my head against this wall. I think I'm starting to understand the problem though. I manage a design company and they have 5 Macs (OSX 10.5/.6/.7) connected over SMB to a Windows 2008 R2 file server; another machine functions as Domain Controller (that might not matter). All the Macs can connect OK; no issues finding the server or logging in. For the most part things are OK. The problem is files locking up. I thought it was a permissions issue at first, but it seems to be file locking. The users open a file (.ind, .pdf, etc.); the file opens, the software reads it and closes it. That's fine, but the folder above the file's folder locks: it can't be moved and it can't be renamed. E.g.: /Working/Project01/Imagefiles/image.pdf and /Finished/. The user opens image.pdf, closes it and wants to move the whole Project01 folder into Finished. It gives a username/password dialogue and then does nothing: no error, it just does nothing. Trying to rename gives a dialogue that says you don't have permission. It looks like it's checking for permission locally, which is why I spent about a week looking at that. Eventually I found that Finder on the Macs seems to be keeping the folders open. I can work around it by killing Finder, remounting the shared drive or closing the file through the server manager, but that just proves the theory; it's not a solution. Has anyone dealt with this problem?

    Read the article

  • Locking Cache Key without Locking the entire Cache

    - by Gandalf
    I have servlets that cache user information rather than retrieving it from the user store on every request (shared Ehcache). The issue I have is that if a client is multi-threaded and makes more than one simultaneous request before it has been authenticated, then I get this in my log: Retrieving User [Bob] Retrieving User [Bob] Retrieving User [Bob] Returned [Bob] ...caching Returned [Bob] ...caching Returned [Bob] ...caching What I want is for the first request to call the user service while the other two requests block; when the first request returns and caches the object, the other two requests go through: Retrieving User [Bob] blocking... blocking... Returned [Bob] ...caching [Bob] found in cache [Bob] found in cache I've thought about locking on the String "Bob" (because, due to interning, it's always the same object, right?). Would that work? And if so, how do I keep track of the keys that actually exist in the cache and build a locking mechanism around them that returns the valid object once it's retrieved? Thanks.
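
    One way to get the "block only on the same key" behaviour is a private lock object per key plus a re-check inside the lock, rather than locking on the interned String itself. Below is a minimal sketch in plain java.util.concurrent terms; UserService, User and the cache map are hypothetical stand-ins for the poster's Ehcache setup.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class PerKeyLoadingCache {
            private final ConcurrentMap<String, Object> keyLocks = new ConcurrentHashMap<String, Object>();
            private final ConcurrentMap<String, User> cache = new ConcurrentHashMap<String, User>();
            private final UserService userService;   // hypothetical backing user store

            public PerKeyLoadingCache(UserService userService) {
                this.userService = userService;
            }

            public User getUser(String name) {
                User user = cache.get(name);
                if (user != null) {
                    return user;                      // already cached: no locking at all
                }
                // One dedicated lock object per key; never lock on the String key itself,
                // since other code locking on an equal interned literal would contend with us.
                Object lock = keyLocks.get(name);
                if (lock == null) {
                    Object candidate = new Object();
                    Object existing = keyLocks.putIfAbsent(name, candidate);
                    lock = (existing != null) ? existing : candidate;
                }
                synchronized (lock) {
                    user = cache.get(name);           // re-check: another thread may have loaded it while we waited
                    if (user == null) {
                        user = userService.retrieve(name);
                        cache.put(name, user);
                    }
                    return user;
                }
            }
        }

    Ehcache also ships blocking/self-populating cache decorators built on this same idea; whether they are available depends on the version in use.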

    Read the article

  • Dummies guide to locking in innodb

    - by ming yeow
    The typical documentation on locking in InnoDB is way too confusing. I think it would be of great value to have a "dummies guide to InnoDB locking". I will start, and I will gather all responses as a wiki: A column needs to be indexed before row-level locking applies to it. EXAMPLE: a DELETE ... WHERE column1 = 10 will effectively lock up the whole table (every row it scans) unless column1 is indexed.

    Read the article

  • Singleton pattern and broken double checked locking in real world java application

    - by saugata
    I was reading the article Double-checked locking and the Singleton pattern, on how double-checked locking is broken, and some related questions here on Stack Overflow. I have used this pattern/idiom several times without any issues. Since I have been using Java 5, my first thought was that this had been rectified in the Java 5 memory model. However, the article says: This article refers to the Java Memory Model before it was revised for Java 5.0; statements about memory ordering may no longer be correct. However, the double-checked locking idiom is still broken under the new memory model. I'm wondering if anyone has actually run into this problem in any application, and under what conditions.
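
    For reference, the idiom the article calls broken is the version without volatile; under the revised (Java 5 / JSR-133) memory model, declaring the field volatile restores the ordering the pattern needs. A minimal sketch, with a made-up class name:

        public final class ConnectionBroker {
            // volatile is what makes the idiom safe on Java 5 and later
            private static volatile ConnectionBroker instance;

            private ConnectionBroker() { }

            public static ConnectionBroker getInstance() {
                ConnectionBroker result = instance;          // one volatile read on the fast path
                if (result == null) {
                    synchronized (ConnectionBroker.class) {
                        result = instance;
                        if (result == null) {
                            instance = result = new ConnectionBroker();
                        }
                    }
                }
                return result;
            }
        }

    Without the volatile keyword, another thread can still observe a non-null reference to a partially constructed object, which is exactly the failure the article describes.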

    Read the article

  • Column locking in innodb?

    - by mingyeow
    I know this sounds weird, but apparently one of my columns is locked. select * from table where type_id = 1 and updated_at < '2010-03-14' limit 1; select * from table where type_id = 3 and updated_at < '2010-03-14' limit 10; The first one will not finish running, while the second one completes smoothly. The only difference is the type_id. Thanks in advance for your help; I have an urgent data job to finish, and this problem is driving me crazy.

    Read the article

  • Castle ActiveRecord optimistic locking on properties

    - by Daniel T.
    Can Castle ActiveRecord do optimistic locking on properties? I found optimistic locking for the entire class, but not for an individual property. In my case, I need to make it so that adding/removing elements in a collection does not update the version number of the entity (so for example, adding a Product to a Store without changing any of Store's properties will not increment the version number).

    Read the article

  • Database locking: ActiveRecord + Heroku

    - by JP
    I'm building a Sinatra-based app for deployment on Heroku. You can imagine it like a standard URL shortener, but one where old shortcodes expire and become available for new URLs (I realise this is a silly concept, but it's easier to explain this way). I'm representing the shortcode in my database as an integer and redefining its reader to give a nice short and unique string from the integer. As some rows will be deleted, I've written code that goes through all the shortcode integers and picks the first free one to use, just before_save. Unfortunately I can make my code create two rows with identical shortcode integers if I run two instances very quickly one after another, which is obviously no good! How should I implement a locking system so that I can quickly save my record with a unique shortcode integer? Here's what I have so far:

        Chars = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a
        CharLength = Chars.length

        class Shorts < ActiveRecord::Base
          before_save :gen_shortcode
          after_save :done_shortcode

          def shortcode
            i = read_attribute(:shortcode).to_i
            return '0' if i == 0
            s = ''
            while i > 0
              s << Chars[i.modulo(CharLength)]
              i /= 62
            end
            s
          end

          private

          def gen_shortcode
            shortcode = 0
            self.class.find(:all, :order => "shortcode ASC").each do |s|
              if s.read_attribute(:shortcode).to_i != shortcode
                # Begin locking?
                break
              end
              shortcode += 1
            end
            write_attribute(:shortcode, shortcode)
          end

          def done_shortcode
            # End Locking?
          end
        end

    Read the article

  • How to improve INSERT INTO ... SELECT locking behavior

    - by Artem
    In our production database, we run the following pseudo-code SQL batch query every hour: INSERT INTO TemporaryTable (SELECT FROM HighlyContentiousTableInInnoDb WHERE allKindsOfComplexConditions are true) Now this query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb, even though it was just reading from it. That was making some other very simple queries take ~25 seconds (that's how long that other query takes). Then I discovered that InnoDB tables in such a case are actually locked by a SELECT! http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/ But I don't really like the solution in the article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it in this way during the copy? Then I could just copy the HighlyContentiousTable to another table and do the query there.

    Read the article

  • What interprocess locking calls should I monitor?

    - by Matt Joiner
    I'm monitoring a process with strace/ltrace in the hope of finding and intercepting a call that checks, and potentially activates, some kind of globally shared lock. While I've dealt with and read about several forms of interprocess locking on Linux before, I'm drawing a blank on what calls to look for. Currently my only suspect is futex(), which comes up very early on in the process' execution. Update0 There is some confusion about what I'm after. I'm monitoring an existing process for calls to persistent interprocess memory or equivalent. I'd like to know what system and library calls to look for. I have no intention of calling these myself, so naturally futex() will come up; I'm sure many libraries will implement their locking calls in terms of it, etc. Update1 I'd like a list of function names, or a link to documentation, that I should monitor at the ltrace and strace levels (specifying which level each belongs to). Any other good advice about how to track down and locate the global lock I have in mind would be great.

    Read the article

  • Does my Dictionary need a locking mechanism?

    - by theateist
    Many threads have access to summary. Each thread will have a unique key for accessing the dictionary: Dictionary<string, List<Result>> summary; Do I need locking for the following operations? (1) summary[key] = new List<Result>(); (2) summary[key].Add(new Result()); It seems that I don't need locking because each thread will access the dictionary with a different key, but won't (1) be problematic, because it adds a new record to the dictionary concurrently with other threads?

    Read the article

  • Naming Convention for Dedicated Thread Locking objects

    - by Chris Sinclair
    A relatively minor question, but I haven't been able to find official documentation or even blog opinion/discussions on it. Simply put: when I have a private object whose sole purpose is to serve for private lock, what do I name that object? class MyClass { private object LockingObject = new object(); void DoSomething() { lock(LockingObject) { //do something } } } What should we name LockingObject here? Also consider not just the name of the variable but how it looks in-code when locking. I've seen various examples, but seemingly no solid go-to advice: Plenty of usages of SyncRoot (and variations such as _syncRoot). Code Sample: lock(SyncRoot), lock(_syncRoot) This appears to be influenced by VB's equivalent SyncLock statement, the SyncRoot property that exists on some of the ICollection classes and part of some kind of SyncRoot design pattern (which arguably is a bad idea) Being in a C# context, not sure if I'd want to have a VBish naming. Even worse, in VB naming the variable the same as the keyword. Not sure if this would be a source of confusion or not. thisLock and lockThis from the MSDN articles: C# lock Statement, VB SyncLock Statement Code Sample: lock(thisLock), lock(lockThis) Not sure if these were named minimally purely for the example or not Kind of weird if we're using this within a static class/method. Several usages of PadLock (of varying casing) Code Sample: lock(PadLock), lock(padlock) Not bad, but my only beef is it unsurprisingly invokes the image of a physical "padlock" which I tend to not associate with the abstract threading concept. Naming the lock based on what it's intending to lock Code Sample: lock(messagesLock), lock(DictionaryLock), lock(commandQueueLock) In the VB SyncRoot MSDN page example, it has a simpleMessageList example with a private messagesLock object I don't think it's a good idea to name the lock against the type you're locking around ("DictionaryLock") as that's an implementation detail that may change. I prefer naming around the concept/object you're locking ("messagesLock" or "commandQueueLock") Interestingly, I very rarely see this naming convention for locking objects in code samples online or on StackOverflow. Question: What's your opinion generally about naming private locking objects? Recently, I've started naming them ThreadLock (so kinda like option 3), but I'm finding myself questioning that name. I'm frequently using this locking pattern (in the code sample provided above) throughout my applications so I thought it might make sense to get a more professional opinion/discussion about a solid naming convention for them. Thanks!

    Read the article

  • mysql row locking via php

    - by deezee
    I am helping a friend with a web-based form for their business. I am trying to get it ready to handle multiple users. I have set it up so that, just before the record is displayed for editing, I lock the record with the following code:

        $query = "START TRANSACTION;";
        mysql_query($query);
        $query = "SELECT field FROM table WHERE ID = \"$value\" FOR UPDATE;";
        mysql_query($query);

    (Okay, that is greatly simplified, but that is the essence of the MySQL.) It does not appear to be working. However, when I go directly to MySQL from the command line, log in with the same user and execute

        START TRANSACTION;
        SELECT field FROM table WHERE ID = "40" FOR UPDATE;

    I can effectively block the web form from accessing record "40" and get the timeout warning. I have tried using BEGIN instead of START TRANSACTION. I have tried doing SET AUTOCOMMIT=0 first and starting the transaction after locking, but I cannot seem to lock the row from the PHP code. Since I can lock the row from the command line, I do not think there is a problem with how the database is set up. I am really hoping there is something simple that I have missed in my reading. FYI, I am developing on XAMPP version 1.7.3, which has Apache 2.2.14, MySQL 5.1.41 and PHP 5.3.1. Thanks in advance. This is my first time posting, but I have gleaned a lot of knowledge from this site in the past.

    Read the article

  • Is this a dangerous locking pattern?

    - by Martin
    I have an enumerator written in C#, which looks something like this:

        try
        {
            ReadWriteLock.EnterReadLock();
            yield return foo;
            yield return bar;
            yield return bash;
        }
        finally
        {
            ReadWriteLock.ExitReadLock();
        }

    I believe this may be a dangerous locking pattern, as the ReadWriteLock will only be released if the enumeration runs to completion; otherwise the lock is left hanging and is never released. Am I correct? If so, what's the best way to combat this?

    Read the article

  • Android pattern locking

    - by JonF
    I have been able to lock the screen using LOCK_PATTERN_ENABLED; however, in many cases when I enable this pattern locking it will accept any pattern and move past the pattern check. Has anyone else run across this?

    Read the article

  • Java Concurrency: CAS vs Locking

    - by Hugo Walker
    I'm currently reading the book Java Concurrency in Practice. Chapter 15 discusses nonblocking algorithms and the compare-and-swap (CAS) method, and it states that CAS performs much better than locking. I want to ask people who have already worked with both of these concepts: which do you prefer, and when? Is it really so much faster? Personally, the use of locks is much clearer and easier for me to understand, and maybe even better to maintain (please correct me if I am wrong). Should we really focus on building our concurrent code around CAS rather than locks to get a better performance boost, or is maintainability the higher priority? I know there is probably no strict rule on when to use what, but I would just like to hear some opinions and experiences with the CAS approach.
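
    To make the comparison concrete, here is a minimal sketch (class names made up) of the same counter written both ways with java.util.concurrent.atomic; the locking version can block a contending thread, while the CAS version never blocks and simply retries when it loses a race:

        import java.util.concurrent.atomic.AtomicLong;

        public class Counters {

            // Lock-based version: simple to reason about; a contending thread may block.
            public static class LockingCounter {
                private long value;
                public synchronized long increment() {
                    return ++value;
                }
            }

            // CAS-based version: no blocking; a losing thread just retries its read-modify-write.
            public static class CasCounter {
                private final AtomicLong value = new AtomicLong();
                public long increment() {
                    while (true) {
                        long current = value.get();
                        long next = current + 1;
                        if (value.compareAndSet(current, next)) {
                            return next;              // our update won the race
                        }
                        // otherwise another thread updated value in between; loop and try again
                    }
                }
            }
        }

    (In practice AtomicLong.incrementAndGet() already does this loop for you; it is spelled out here only to show what CAS-based code looks like next to a lock.)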

    Read the article

  • double checked locking - objective c

    - by bandejapaisa
    I realise double-checked locking is flawed in Java due to the memory model, but that is usually associated with the singleton pattern and optimizing the creation of the singleton. What about this case in Objective-C? I have a boolean flag to determine whether my application is streaming data or not. I have 3 methods, startStreaming, stopStreaming and streamingDataReceived, and I protect them from multiple threads using: - (void) streamingDataReceived:(StreamingData *)streamingData { if (self.isStreaming) { @synchronized(self) { if (self.isStreaming) { - (void) stopStreaming { if (self.isStreaming) { @synchronized(self) { if (self.isStreaming) { - (void) startStreaming:(NSArray *)watchlistInstrumentData { if (!self.isStreaming) { @synchronized(self) { if (!self.isStreaming) { Is this double check unnecessary? Does the double check have similar problems in Objective-C as in Java? What are the alternatives to this pattern (anti-pattern)? Thanks

    Read the article

  • How to avoid SQLiteException locking errors

    - by TheArchedOne
    I'm developing an Android app. It has multiple threads reading from and writing to the Android SQLite db. I am receiving the following error: SQLiteException: error code 5: database is locked I understand that SQLite locks the entire db on inserting/updating, but these errors only seem to happen when inserting/updating while I'm running a select query. The select query returns a cursor which is being left open quite a while (a few seconds sometimes) while I iterate over it. If the select query is not running, I never get the locks. I'm surprised that the select could be locking the db... is this possible, or is something else going on? What's the best way to avoid such locks? Thanks TAO
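
    One common cause of "database is locked" inside a single app is having several SQLiteOpenHelper instances (and therefore several connections) open against the same file. A sketch of the usual remedy, a single shared helper that every thread goes through, with made-up database/table names:

        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;

        public class AppDatabase extends SQLiteOpenHelper {
            private static final String DB_NAME = "app.db";   // assumed name
            private static final int DB_VERSION = 1;

            private static AppDatabase instance;

            // Every thread asks for the same helper, so all access funnels through one connection,
            // which SQLiteDatabase serializes internally instead of failing with "database is locked".
            public static synchronized AppDatabase getInstance(Context context) {
                if (instance == null) {
                    instance = new AppDatabase(context.getApplicationContext());
                }
                return instance;
            }

            private AppDatabase(Context context) {
                super(context, DB_NAME, null, DB_VERSION);
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                db.execSQL("CREATE TABLE IF NOT EXISTS items (_id INTEGER PRIMARY KEY, name TEXT)");
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                // schema migrations would go here
            }
        }

    Closing cursors promptly (or copying the rows out before the long iteration) also shortens the window in which a reader holds the lock the writer is waiting on.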

    Read the article

  • optimistic locking batch update

    - by Priit
    How do I use optimistic locking with batch updates? I am using SimpleJdbcTemplate, and for a single row I can build update SQL that increments the version column's value and includes the version in the WHERE clause. Unfortunately the result int[] updated = simpleJdbcTemplate.batchUpdate does not contain row counts when using the Oracle driver. All elements are -2, indicating an unknown row count. Is there some other, more performant way of doing this than executing all updates individually? These batches contain an average of only 5 items, but may have up to 250.
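
    For comparison, a sketch of the fallback being asked about: issuing the version-checked updates one at a time so that each row count can actually be verified. Item, its getters and the table/column names here are made up; -2 is java.sql.Statement.SUCCESS_NO_INFO, which is what the batch API reports when the driver cannot supply per-row counts.

        import java.util.List;
        import org.springframework.dao.OptimisticLockingFailureException;
        import org.springframework.jdbc.core.JdbcTemplate;

        public class ItemDao {
            private final JdbcTemplate jdbcTemplate;

            public ItemDao(JdbcTemplate jdbcTemplate) {
                this.jdbcTemplate = jdbcTemplate;
            }

            public void updateAll(List<Item> items) {
                String sql = "UPDATE item SET name = ?, version = version + 1 "
                           + "WHERE id = ? AND version = ?";
                for (Item item : items) {
                    // update() returns the affected-row count for this single statement,
                    // so the optimistic-lock check stays verifiable even on Oracle
                    int rows = jdbcTemplate.update(sql,
                            new Object[] { item.getName(), item.getId(), item.getVersion() });
                    if (rows != 1) {
                        throw new OptimisticLockingFailureException(
                                "Row " + item.getId() + " was changed or deleted since it was read (version "
                                + item.getVersion() + ")");
                    }
                }
            }
        }

    Whether the lost batching matters for 5 to 250 rows is something only a measurement against the actual Oracle instance can answer; the JDBC batch itself remains faster, it just cannot report usable counts here.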

    Read the article

  • Guidelines of when to use locking

    - by miguel
    I would like to know if there are any guidelines which a developer should follow as to when (and where) to place locks. For instance, I understand that code such as this should be locked, to avoid the possibility of another thread changing the value of SomeHeapValue unexpectedly:

        class Foo
        {
            public SomeHeapObject myObject;

            public void DoSummat(object inputValue_)
            {
                myObject.SomeHeapValue = inputValue_;
            }
        }

    My question is, however, how deep does one go with the locking? For instance, if we have this code:

        class Foo
        {
            public SomeHeapObject myObject;

            public void DoSummat(object inputValue_)
            {
                myObject.SomeHeapValue = GetSomeHeapValue();
            }
        }

    Should we lock in the DoSummat(...) method, or should we lock in the GetSomeHeapValue() method? Are there any guidelines that you all keep in mind when structuring multi-threaded code?

    Read the article

  • C# Working with Locking and Threads

    - by aherrick
    I'm working on this small test application to learn threading/locking. I have the following code, and I would think that the line should only write to the console once. However, it doesn't seem to be working as expected. Any thoughts on why? What I'm trying to do is add this Lot object to a List, and then, if any other threads try to hit that list, they would block. Am I completely misusing lock here?

        class Program
        {
            static void Main(string[] args)
            {
                int threadCount = 10;

                // spin up x number of test threads
                Thread[] threads = new Thread[threadCount];
                Work w = new Work();
                for (int i = 0; i < threadCount; i++)
                {
                    threads[i] = new Thread(new ThreadStart(w.DoWork));
                }
                for (int i = 0; i < threadCount; i++)
                {
                    threads[i].Start();
                }

                // don't let the console close
                Console.ReadLine();
            }
        }

        public class Work
        {
            List<Lot> lots = new List<Lot>();
            private static readonly object thisLock = new object();

            public void DoWork()
            {
                Lot lot = new Lot() { LotID = 1, LotNumber = "100" };
                LockLot(lot);
            }

            private void LockLot(Lot lot)
            {
                // i would think that "Lot has been added" should only print once?
                lock (thisLock)
                {
                    if (!lots.Contains(lot))
                    {
                        lots.Add(lot);
                        Console.WriteLine("Lot has been added");
                    }
                }
            }
        }

    Read the article

  • Python Locking Implementation (with threading module)

    - by Matty
    This is probably a rudimentary question, but I'm new to threaded programming in Python and am not entirely sure what the correct practice is. Should I be creating a single lock object (either global or passed around) and using that everywhere I need to do locking? Or should I be creating multiple lock instances in each of the classes where I will be employing them? Take these 2 rudimentary code samples; which direction is best to go? The main difference is that a single lock instance is used in both class A and class B in the second, while multiple instances are used in the first.

    Sample 1

        class A():
            def __init__(self, theList):
                self.theList = theList
                self.lock = threading.Lock()

            def poll(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.append(something)
                    finally:
                        self.lock.release()

        class B(threading.Thread):
            def __init__(self, theList):
                self.theList = theList
                self.lock = threading.Lock()
                self.start()

            def run(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.remove(something)
                    finally:
                        self.lock.release()

        if __name__ == "__main__":
            aList = []
            for x in range(10):
                B(aList)
            A(aList).poll()

    Sample 2

        class A():
            def __init__(self, theList, lock):
                self.theList = theList
                self.lock = lock

            def poll(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.append(something)
                    finally:
                        self.lock.release()

        class B(threading.Thread):
            def __init__(self, theList, lock):
                self.theList = theList
                self.lock = lock
                self.start()

            def run(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.remove(something)
                    finally:
                        self.lock.release()

        if __name__ == "__main__":
            lock = threading.Lock()
            aList = []
            for x in range(10):
                B(aList, lock)
            A(aList, lock).poll()

    Read the article

  • Utility that helps in file locking - expert tips wanted

    - by maix
    I've written a subclass of file that a) provides methods to conveniently lock it (using fcntl, so it only supports unix, which is however OK for me atm) and b) when reading or writing asserts that the file is appropriately locked. Now I'm not an expert at such stuff (I've just read one paper [de] about it) and would appreciate some feedback: Is it secure, are there race conditions, are there other things that could be done better … Here is the code: from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN, LOCK_NB class LockedFile(file): """ A wrapper around `file` providing locking. Requires a shared lock to read and a exclusive lock to write. Main differences: * Additional methods: lock_ex, lock_sh, unlock * Refuse to read when not locked, refuse to write when not locked exclusivly. * mode cannot be `w` since then the file would be truncated before it could be locked. You have to lock the file yourself, it won't be done for you implicitly. Only you know what lock you need. Example usage:: def get_config(): f = LockedFile(CONFIG_FILENAME, 'r') f.lock_sh() config = parse_ini(f.read()) f.close() def set_config(key, value): f = LockedFile(CONFIG_FILENAME, 'r+') f.lock_ex() config = parse_ini(f.read()) config[key] = value f.truncate() f.write(make_ini(config)) f.close() """ def __init__(self, name, mode='r', *args, **kwargs): if 'w' in mode: raise ValueError('Cannot open file in `w` mode') super(LockedFile, self).__init__(name, mode, *args, **kwargs) self.locked = None def lock_sh(self, **kwargs): """ Acquire a shared lock on the file. If the file is already locked exclusively, do nothing. :returns: Lock status from before the call (one of 'sh', 'ex', None). :param nonblocking: Don't wait for the lock to be available. """ if self.locked == 'ex': return # would implicitly remove the exclusive lock return self._lock(LOCK_SH, **kwargs) def lock_ex(self, **kwargs): """ Acquire an exclusive lock on the file. :returns: Lock status from before the call (one of 'sh', 'ex', None). :param nonblocking: Don't wait for the lock to be available. """ return self._lock(LOCK_EX, **kwargs) def unlock(self): """ Release all locks on the file. Flushes if there was an exclusive lock. :returns: Lock status from before the call (one of 'sh', 'ex', None). 
""" if self.locked == 'ex': self.flush() return self._lock(LOCK_UN) def _lock(self, mode, nonblocking=False): flock(self, mode | bool(nonblocking) * LOCK_NB) before = self.locked self.locked = {LOCK_SH: 'sh', LOCK_EX: 'ex', LOCK_UN: None}[mode] return before def _assert_read_lock(self): assert self.locked, "File is not locked" def _assert_write_lock(self): assert self.locked == 'ex', "File is not locked exclusively" def read(self, *args): self._assert_read_lock() return super(LockedFile, self).read(*args) def readline(self, *args): self._assert_read_lock() return super(LockedFile, self).readline(*args) def readlines(self, *args): self._assert_read_lock() return super(LockedFile, self).readlines(*args) def xreadlines(self, *args): self._assert_read_lock() return super(LockedFile, self).xreadlines(*args) def __iter__(self): self._assert_read_lock() return super(LockedFile, self).__iter__() def next(self): self._assert_read_lock() return super(LockedFile, self).next() def write(self, *args): self._assert_write_lock() return super(LockedFile, self).write(*args) def writelines(self, *args): self._assert_write_lock() return super(LockedFile, self).writelines(*args) def flush(self): self._assert_write_lock() return super(LockedFile, self).flush() def truncate(self, *args): self._assert_write_lock() return super(LockedFile, self).truncate(*args) def close(self): self.unlock() return super(LockedFile, self).close() (the example in the docstring is also my current use case for this) Thanks for having read until down here, and possibly even answering :)

    Read the article

  • Thread locking issue with FileHelpers between calling the engine.ReadNext() method and reading engine.LineNumber

    - by Rad
    I use producer/consumer pattern with FileHelpers library to import data from one file (which can be huge) using multiple threads. Each thread is supposed to import a chunk of that file and I would like to use LineNumber property of the FileHelperAsyncEngine instance that is reading the file as primary key for imported rows. FileHelperAsyncEngine internally has an IEnumerator IEnumerable.GetEnumerator(); which is iterated over using engine.ReadNext() method. That internally sets LineNumber property (which seems is not thread safe). Consumers will have Producers assiciated with them that will supply DataTables to Consumers which will consume them via SqlBulkLoad class which will use IDataReader implementation which will iterate over a collection of DataTables which are internal to a Consumer instance. Each instance of will have one SqlBulkCopy instance associate with it. I have thread locking issue. Below is how I create multiple Producer threads. I start each thread afterwords. Produce method on a producer instance will be called determining which chunk of input file will be processed. It seems that engine.LineNumber is not thread safe and I doesn't import a proper LineNumber in the database. It seems that by the time engine.LineNumber is read some other thread called engine.ReadNext() and changed engine.LineNumber property. I don't want to lock the loop that is supposed to process a chunk of input file because I loose parallelism. How to reorganize the code to solve this threading issue? Thanks Rad for (int i = 0; i < numberOfProducerThreads; i++) DataConsumer consumer = dataConsumers[i]; //create a new producer DataProducer producer = new DataProducer(); //consumer has already being created consumer.Subscribe(producer); FileHelperAsyncEngine orderDetailEngine = new FileHelperAsyncEngine(recordType); orderDetailEngine.Options.RecordCondition.Condition = RecordCondition.ExcludeIfBegins; orderDetailEngine.Options.RecordCondition.Selector = STR_ORDR; int skipLines = i * numberOfBufferTablesToProcess * DataBuffer.MaxBufferRowCount; Thread newThread = new Thread(() => { producer.Produce(consumer, inputFilePath, lineNumberFieldName, dict, orderDetailEngine, skipLines, numberOfBufferTablesToProcess); consumer.SetEndOfData(producer); }); producerThreads.Add(newThread); thread.Start();} public void Produce(DataConsumer consumer, string inputFilePath, string lineNumberFieldName, Dictionary<string, object> dict, FileHelperAsyncEngine engine, int skipLines, int numberOfBufferTablesToProcess) { lock (this) { engine.Options.IgnoreFirstLines = skipLines; engine.BeginReadFile(inputFilePath); } int rowCount = 1; DataTable buffer = consumer.BufferDataTable; while (engine.ReadNext() != null) { lock (this) { dict[lineNumberFieldName] = engine.LineNumber; buffer.Rows.Add(ObjectFieldsDataRowMapper.MapObjectFieldsToDataRow(engine.LastRecord, dict, buffer)); if (rowCount % DataBuffer.MaxBufferRowCount == 0) { consumer.AddBufferDataTable(buffer); buffer = consumer.BufferDataTable; } if (rowCount % (numberOfBufferTablesToProcess * DataBuffer.MaxBufferRowCount) == 0) { break; } rowCount++; } } if (buffer.Rows.Count > 0) { consumer.AddBufferDataTable(buffer); } engine.Close(); }

    Read the article
