Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.

Page 127 of 1205

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over.

    The problem with locks

    There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series):

    Locks block threads. The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. A blocked thread just sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency.

    Locks are slow. Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance.

    Deadlocks. When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see.

    So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection.

    Implementation

    As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it.

    So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method:

        private void GetBucketAndLockNo(
            int hashcode, out int bucketNo, out int lockNo, int bucketCount)
        {
            // the bucket number is the hashcode (without the initial sign bit)
            // modulo the number of buckets
            bucketNo = (hashcode & 0x7fffffff) % bucketCount;

            // and the lock number is the bucket number modulo the number of locks
            lockNo = bucketNo % m_locks.Length;
        }
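
    To make the one-lock-per-bucket idea concrete, here is a minimal, self-contained sketch of the pattern. This is my own toy code, not the decompiled ConcurrentDictionary source; the names are invented, resizing is omitted, and plain lists stand in for the node linked lists:

        using System.Collections.Generic;

        // A toy striped-lock map: each bucket holds a list of entries, and several
        // buckets share one lock object. Only an illustration of the bucket/lock
        // mapping described above, not the BCL implementation.
        class StripedMap<TKey, TValue>
        {
            private readonly List<KeyValuePair<TKey, TValue>>[] m_buckets;
            private readonly object[] m_locks;

            public StripedMap(int bucketCount, int concurrencyLevel)
            {
                m_buckets = new List<KeyValuePair<TKey, TValue>>[bucketCount];
                for (int i = 0; i < bucketCount; i++)
                    m_buckets[i] = new List<KeyValuePair<TKey, TValue>>();

                m_locks = new object[concurrencyLevel];
                for (int i = 0; i < concurrencyLevel; i++)
                    m_locks[i] = new object();
            }

            private void MapKeyToBucketAndLock(int hashcode, out int bucketNo, out int lockNo)
            {
                bucketNo = (hashcode & 0x7fffffff) % m_buckets.Length;
                lockNo = bucketNo % m_locks.Length;
            }

            public void AddOrUpdate(TKey key, TValue value)
            {
                int bucketNo, lockNo;
                MapKeyToBucketAndLock(key.GetHashCode(), out bucketNo, out lockNo);

                // Only the lock covering this bucket is taken; writers hitting
                // buckets covered by other locks proceed in parallel.
                lock (m_locks[lockNo])
                {
                    var bucket = m_buckets[bucketNo];
                    for (int i = 0; i < bucket.Count; i++)
                    {
                        if (EqualityComparer<TKey>.Default.Equals(bucket[i].Key, key))
                        {
                            bucket[i] = new KeyValuePair<TKey, TValue>(key, value);
                            return;
                        }
                    }
                    bucket.Add(new KeyValuePair<TKey, TValue>(key, value));
                }
            }
        }

    Writers that hash to buckets covered by different lock objects never contend with each other, which is exactly the partitioning the real collection relies on.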

    However, mapping buckets to locks in this way does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array. Instead, ConcurrentDictionary uses a strict linked list on each bucket. This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets.

    Modifying the dictionary

    All the operations on the dictionary follow the same basic pattern:

        void AlterBucket(TKey key, ...)
        {
            int bucketNo, lockNo;
         1: GetBucketAndLockNo(
                key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length);
         2: lock (m_locks[lockNo])
            {
         3:     Node headNode = m_buckets[bucketNo];
         4:     Mutate the node linked list as appropriate
            }
        }

    For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern.

    Performance issues

    There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.)

    Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock: two threads, each holding a lock, each trying to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block.

    At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check whether the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. That would create a new set of lock objects, thus allowing another thread to ignore the existing locks (and any threads holding them), breaking thread-safety.
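
    To make the ordered lock-taking concrete, here is a minimal sketch of what a pair of methods like AcquireLocks and ReleaseLocks has to do - my own simplification rather than the decompiled code:

        using System;
        using System.Threading;

        class LockStriping
        {
            private readonly object[] m_locks;

            public LockStriping(int concurrencyLevel)
            {
                m_locks = new object[concurrencyLevel];
                for (int i = 0; i < concurrencyLevel; i++)
                    m_locks[i] = new object();
            }

            // Take out locks m_locks[fromInclusive .. toExclusive) in array order,
            // counting how many were actually acquired. Because every thread acquires
            // locks in ascending index order, no thread can hold a lock while waiting
            // on a lower-numbered one, so the 'swap' deadlock cannot occur.
            private void AcquireLocks(int fromInclusive, int toExclusive, ref int locksAcquired)
            {
                for (int i = fromInclusive; i < toExclusive; i++)
                {
                    Monitor.Enter(m_locks[i]);
                    locksAcquired++;
                }
            }

            // Release only the locks that were actually acquired, again in array order.
            private void ReleaseLocks(int fromInclusive, int toExclusive)
            {
                for (int i = fromInclusive; i < toExclusive; i++)
                    Monitor.Exit(m_locks[i]);
            }

            // Typical caller shape: take every lock, do the work, release in finally.
            public void WithAllLocks(Action body)
            {
                int locksAcquired = 0;
                try
                {
                    AcquireLocks(0, m_locks.Length, ref locksAcquired);
                    body();
                }
                finally
                {
                    ReleaseLocks(0, locksAcquired);
                }
            }
        }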

    Consequences of growing the array

    Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method. The operation of this method can be boiled down to:

        private void GrowTable(Node[] buckets)
        {
            try
            {
         1:     Acquire first lock in the locks array

                // this causes any other thread trying to take out
                // all the locks to block, because the first lock in the array
                // is always the one taken out first

                // check if another thread has already resized the buckets array
                // while we were waiting to acquire the first lock
         2:     if (buckets != m_buckets) return;

         3:     Calculate the new size of the backing array
         4:     Node[] array = new Node[size];
         5:     Acquire all the remaining locks
         6:     Re-hash the contents of the existing buckets into array
         7:     m_buckets = array;
            }
            finally
            {
         8:     Release all locks
            }
        }

    As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime); when the second thread is unblocked, it'll see that the array has already been resized and exit without doing anything.

    There is another case we need to consider. Looking back at the AlterBucket method above, consider the following situation:

    Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers.
    Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2.
    Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8).
    Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid. Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is.

    Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here:

        void AlterBucket(TKey key, ...)
        {
            while (true)
            {
                Node[] buckets = m_buckets;

                int bucketNo, lockNo;
                GetBucketAndLockNo(
                    key.GetHashCode(), out bucketNo, out lockNo, buckets.Length);

                lock (m_locks[lockNo])
                {
                    // if the state has changed, go back to the start
                    if (buckets != m_buckets) continue;

                    Node headNode = m_buckets[bucketNo];
                    Mutate the node linked list as appropriate
                }
                break;
            }
        }

    TryGetValue and GetEnumerator

    And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. The lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything.

    However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this). This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key and value removed before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer.
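
    To illustrate publishing changes with a single assignment so that lockless readers always see a valid list, here is a minimal toy sketch of one bucket. This is my own illustration, not the ConcurrentDictionary source; the volatile m_head field plays the role of one entry in m_buckets:

        using System.Collections.Generic;

        // A single lock-free-readable bucket: writers build new nodes off to the side
        // and publish them with one assignment to the volatile head; readers never
        // lock, they just walk whatever list the head pointed to when they started.
        class LocklessBucket<TKey, TValue>
        {
            private sealed class Node
            {
                public readonly TKey Key;
                public readonly TValue Value;
                // volatile so that a removal (re-pointing Next past a node)
                // would also be published safely to lockless readers
                public volatile Node Next;

                public Node(TKey key, TValue value, Node next)
                {
                    Key = key;
                    Value = value;
                    Next = next;
                }
            }

            private volatile Node m_head;
            private readonly object m_lock = new object();

            // Writers still lock against each other, but the change only becomes
            // visible to readers at the single assignment to m_head.
            public void Add(TKey key, TValue value)
            {
                lock (m_lock)
                {
                    m_head = new Node(key, value, m_head);
                }
            }

            // Readers take no lock at all: the list reachable from whatever m_head
            // they observe is always in a valid state.
            public bool TryGetValue(TKey key, out TValue value)
            {
                for (Node node = m_head; node != null; node = node.Next)
                {
                    if (EqualityComparer<TKey>.Default.Equals(node.Key, key))
                    {
                        value = node.Value;
                        return true;
                    }
                }
                value = default(TValue);
                return false;
            }
        }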

    Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of it.

    Lockless: TryGetValue, GetEnumerator, the indexer getter, ContainsKey.

    Takes out every lock (lockfull?): Count, IsEmpty, Keys, Values, CopyTo, ToArray.

    Concurrent principles

    That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection; that I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming:

    Partitioning. When using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently with other partitions.

    Ordered lock-taking. When a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks.

    Lockless reads. Read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection.

    That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!

    Read the article

  • Incremental backups from Rackspace Cloud Files to Amazon Glacier

    - by Martin Wilson
    Is there a software product/module (open-source or commercial) that can provide incremental backups from Rackspace Cloud Files to Amazon Glacier? We are looking for something that will provide the following functionality (or achieve the same result, i.e. a cost-effective backup strategy for files stored in Rackspace Cloud Files):

    - Work out which files have been added to or modified in a Rackspace Cloud account (since the last backup).
    - Create a ZIP (or similar) of these files and store them in Amazon Glacier.
    - Keep a record of which files are in which ZIPs.
    - Ideally, restore either a single file or all files from Glacier back into Rackspace.
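
    For what it's worth, the "work out which files have been added or modified since the last backup" step is straightforward to sketch if the per-object ETags reported by Cloud Files are recorded in a manifest from the previous run. The sketch below is a rough C# illustration; the listing, zipping and Glacier-upload helpers named in the trailing comment are hypothetical placeholders, not real SDK calls:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One entry per object in the Cloud Files container.
        class RemoteObject
        {
            public string Name;
            public string ETag;   // content checksum as reported by Cloud Files
        }

        class IncrementalBackupPlanner
        {
            // Given the previous backup's manifest (name -> ETag) and the current
            // listing of the container, decide which objects need to go into the
            // next archive pushed to Glacier.
            public static List<RemoteObject> FindChanged(
                IDictionary<string, string> previousManifest,
                IEnumerable<RemoteObject> currentListing)
            {
                return currentListing
                    .Where(o =>
                    {
                        string oldETag;
                        bool seenBefore = previousManifest.TryGetValue(o.Name, out oldETag);
                        return !seenBefore || oldETag != o.ETag;   // new or modified
                    })
                    .ToList();
            }
        }

        // Sketch of the surrounding flow; every helper below is hypothetical and
        // would have to be implemented against the Cloud Files and Glacier APIs:
        //
        //   var listing   = ListCloudFilesContainer(...);
        //   var changed   = IncrementalBackupPlanner.FindChanged(manifest, listing);
        //   var zipPath   = ZipFiles(changed);
        //   var archiveId = UploadToGlacier(zipPath);
        //   RecordZipContents(archiveId, changed);   // for later restore
        //   UpdateManifest(manifest, changed);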

    Read the article

  • My uncle is the family historian. We need to host about 5-15 TB of images and video. Any inexpensive

    - by Citizen
    Basically, we have high-quality scans of thousands of old family photos, plus tons of family video. We want to host them where we can still have total control over the content and restrict access. I'm a PHP programmer, so the security is not an issue. What is an issue is finding a host to store 10 TB of data without paying a ton of money. We really are not planning on a lot of traffic - maybe 1-10 visitors a day, family only. Kind of like an online library.

    Read the article

  • MediaShield RAID 5 is showing up as 760GB when the actual size is 2.7TB

    - by Ilya Volodin
    I just finished setting up Windows Server 2003 on my new server and started setting up a RAID 5 array for it. I have 4x1TB hard drives. From the MediaShield RAID utility (at boot time) the RAID size is displayed as 2.7TB. Linux also shows it as 2.7TB. However, in Windows, everything (including Windows Disk Management as well as the Windows-based MediaShield utility) is reporting only 760GB. I already tried converting the partition table from MBR to GUID (GPT), because I read somewhere that Windows can only handle up to 2TB with MBR partition tables; that didn't help much. I tried searching for partitioning utilities that I could use, but couldn't find anything free. I formatted the disk as an NTFS partition from within Linux, and it stopped showing up in Windows altogether - even the MediaShield Windows utility isn't showing it anymore. Windows is installed on a separate 500GB hard drive that's set up not to use RAID. Any ideas?

    Read the article

  • IOmeter Hanging

    - by Xiuhtecuhtli
    I am attempting to set up a test bed with two servers using Iometer. I have Iometer and Dynamo running on my manager server at 10.0.0.3, and another machine running only Dynamo at 10.0.0.4. So, on 10.0.0.4 I run "Dynamo.exe /i 10.0.0.4 /m 10.0.0.3". At this point the Iometer.exe on 10.0.0.3 hangs and stops responding. Any thoughts?

    Read the article

  • Database Performance Across 3 NetApp FAS2050

    - by bjd145
    I recently inherited a SQL Server 2005 box that has LUNs mounted on it from three different NetApp FAS2050 filers. The reason there are three is because the first two filers are out of space and our databases continually need more of it. What kind of performance impact (if any) does this setup have? Is it better to move the LUNs from the two other older filers to the newer one so that we have only one filer/controller for caching, etc ? This is in general. I apologize for not having more specifics.

    Read the article

  • Is RAID 5 or SnapRAID the better alternative for a media server raid system?

    - by rubo77
    I am using a RAID 5 system for my Ubuntu 12.04 XBMC media server with 5 disks. Since the data isn't changing a lot and a total loss wouldn't be so bad (I have another backup anyway), I am thinking about using SnapRAID. It says: "SnapRAID is mainly targeted for a home media center, where you have a lot of big files that rarely change." The main advantage for me would be power saving, because not all disks have to run all the time. Would you recommend using this (with a regular resync script once a day)?

    Read the article

  • Failures when copying between two external drives on the same controller

    - by Krzysztof Kosinski
    I'm encountering a weird problem which is present both on Ubuntu 9.10 and 10.04, on two different machines. When copying between two external drives connected to the same USB controller, the transfer will hang at some random point (after copying 300MB, 1GB, 10GB - it doesn't appear to depend on the dataset being copied). The hang appears to happen sooner in 10.04, and later if both drives are connected to a hub; if the drives are connected to two distinct physical ports on the machine, the hang happens very quickly. Hangs cannot be reproduced if:

    - data is copied from the first external drive to an internal drive, then to the second external drive, or
    - the drives are connected to different USB controllers, for example the first one to the built-in controller and the second one via an external PCMCIA controller.

    lspci says the first machine has an Intel ICH9 USB controller, the second an Intel ICH4. Is this a hardware problem, a kernel problem or a software issue? I used Nautilus when copying the files.

    Read the article

  • How can I repair my USB drive?

    - by yurko
    The USB drive is in a read-only state and I can't repair it. First of all I tried to erase it using dd:

        root@yurko-laptop:/home/yurko-laptop# ls -l /dev/disk/by-id | grep usb
        lrwxrwxrwx 1 root root 9 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0 -> ../../sdb
        lrwxrwxrwx 1 root root 10 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0-part1 -> ../../sdb1
        root@yurko-laptop:/home/yurko-laptop# dd if=/dev/zero of=/dev/sdb
        dd: writing to `/dev/sdb': No space left on device
        8257537+0 records in
        8257536+0 records out
        4227858432 bytes (4.2 GB) copied, 942.633 s, 4.5 MB/s

    After that I wanted to create a new filesystem using fdisk:

        root@yurko-laptop:/home/yurko-laptop# fdisk /dev/sdb
        You will not be able to write the partition table.

        WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
        switch off the mode (command 'c') and change display units to
        sectors (command 'u').

        Command (m for help): p

        Disk /dev/sdb: 4227 MB, 4227858432 bytes
        4 heads, 63 sectors/track, 32768 cylinders
        Units = cylinders of 252 * 512 = 129024 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1              18       32768     4126596    b  W95 FAT32

        Command (m for help):

    fdisk showed that the partition still exists and that I can't write the partition table. I tried to delete the existing partition:

        Command (m for help): d
        Selected partition 1

        Command (m for help): w
        Unable to write /dev/sdb
        root@yurko-laptop:/home/yurko-laptop#

    Why am I not able to write the partition table? Does it mean that some hardware failure has occurred? And is it possible to repair the current USB drive? I've tried to use hdparm and it showed that the readonly flag is on:

        root@yurko-laptop:/home/yurko-laptop# hdparm /dev/sdb

        /dev/sdb:
        SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0a 00 00 00 00 26 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
         multcount     =  0 (off)
         readonly      =  1 (on)
         readahead     = 256 (on)
         geometry      = 1016/131/62, sectors = 8257536, start = 0

    Read the article

  • Is it possible to have a wireless in-house NAS with wireless data transfer rates equivalent to SATA speeds?

    - by techaddict
    Basically I would like to know if it is possible to set up a NAS in my house, accessed wirelessly, that can reach real-life data transfer speeds equivalent to USB 3.0 or an internal SATA hard drive. I have been wanting to do this for some time (a couple of years now). Basically, this is what I want to do: plug in a number of hard drives in an array, somewhere in my house, to be left plugged in and never have to be monitored - ideally several terabytes. Whenever I am home, my computer and laptop should be configured to automatically find the NAS, as easily as plugging in an external hard drive - except completely wirelessly. Data transfer needs to be as seamless and quick as having added another internal hard drive in my laptop. Moreover, data should be accessible without having to copy it over - I should be able to wirelessly access the NAS, browse files, and open files directly from it. For example, say I wanted to open a video - I should be able to play a video that is located on the NAS, directly from the NAS, completely wirelessly. If I wanted to open a .pdf file, I should be able to open it and read it directly from the NAS, as if it were located on my physical internal hard drive. Cost is important as well. Please tell me what equipment I need for this to be possible. I know you geniuses out there can tell me if this is possible.

    Read the article

  • How can I access my music anywhere?

    - by musicfreak
    So here's the idea: I use multiple computers on an almost daily basis. I would like to be able to access my entire music library from any of those computers through the Internet. Is there a service or perhaps some software that would allow me to host my music "in the cloud", or in some other way access it from a different computer? I've searched for something like this and the closest I've found is the Media Player application found in Opera Unite, but that requires my home computer to be turned on at all times, which is less than ideal. I'm willing to pay, but not so much that I could just rent a private server for a lower price. Thanks in advance.

    Read the article

  • Unable to edit CIFS Share permissions

    - by Datapimp23
    Hi, I have an HP StorageWorks 2540i disk-to-disk backup device. The device is managed via a web interface. I joined the device to our AD domain in the CIFS server configuration and then created a CIFS share called backupdata. If I try to access it I'm prompted for a login. The permissions tab in the web interface is empty, and the following message is displayed: "CIFS Authentication is managed through Active Directory". However, I do not find the share in AD. I forced replication between all DCs and I still do not find it. Is there another way to edit the permissions?

    Read the article

  • Using 2-port LSI 2308-8e card to control 24 SAS HDDs

    - by GregC
    I would like to rely on a RAID-on-chip solution to control 24 SAS hard drives in a direct-attached environment. How would you approach this to get the best bandwidth, given that I'd like to spend less than $10,000 on the interconnect? I've read that the LSI 2308 chip can easily handle an 8-drive SSD RAID 6 in hardware. I'd like to harness its power to control 24 SAS hard drives over an expander in an external enclosure. Currently I use an Infortrend S24S-G2240 external enclosure. It provides its own controller and expander. I'd like to use the LSI 2308 controller for RAID 6 somehow instead of the mystery controller in the enclosure. P.S. I tried to create SAS-expander as a tag, but my rep on this site is low.

    Read the article

  • Recommendations for USB flash drive fast at writing small files

    - by Andrew Bainbridge
    I want a drive that can be used as my work drive, storing a Subversion repo and sandbox for a small project. I'd also like it to be able to store a DVD rip. At the moment I've got a Super Talent Pico-C 8GB. It's fast at reading and writing DVD rips, but the performance on small files (i.e. less than 4KB) is utterly terrible (we're talking floppy disk speeds here). This Ars review measured a similar Super Talent drive and pretty much confirmed my measurements (take a look at the random write speeds on page 5). So, I'm looking for an 8GB or bigger drive that doesn't suck at reading and writing small files and still has acceptable performance for very large files.

    Read the article

  • Is it generally a bad idea to have other types of virtual appliances installed alongside a firewall

    - by MGSoto
    I want to run my Firewall/NAT software (pfsense) and an internal NAS (looking at freenas right now) for my SOHO on one machine. Right now I have them separated on two different machines, but I'd like to consolidate them. Is this generally a bad idea? I see the security concern where if the firewall or host OS is compromised, then your data is essentially screwed. But is it really a concern for me?

    Read the article

  • Alias name in Brocade

    - by wildchild
    How do I look up an alias name using the WWN in Brocade? Say I have created an alias "CentosNode2Port1". Using switchshow I get the WWN, but how do I look up the alias name from it? alishow (WWN) is not accepting it. Please help.

    Read the article

  • Saving music wisely: Why save 'Queen - Bohemian Rhapsody.mp3' millions of times?

    - by hsmit
    As far as I'm concerned, Queen's song 'Bohemian Rhapsody' is one of the most popular songs of all time. But for the purpose of this message you may replace it with another track. At the same time, I think 60% of digital-music listeners have this track. Sometimes we have multiple copies: different versions of the track, different devices, unwanted duplicates in download folders, iTunes folders, etc. Wouldn't it be much smarter to store these songs only once? You can imagine various solutions for this. How would you accomplish it? Some criteria that may help you find an answer:

    - It must reduce disk space.
    - It must remember which music belongs to you (DRM).
    - It must use network traffic efficiently.
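
    One way to frame a solution is content-addressed storage: hash each track's bytes, keep the audio once per unique hash, and give every user a library of references. Below is a minimal sketch of that idea (my own illustration; it ignores DRM and cannot deduplicate different encodings of the same song, since those hash differently):

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;

        // Content-addressed store: the audio bytes are kept once per unique SHA-256
        // hash; each user's library is just a mapping from track name to hash.
        class DeduplicatedMusicStore
        {
            private readonly Dictionary<string, byte[]> m_blobs =
                new Dictionary<string, byte[]>();
            private readonly Dictionary<string, Dictionary<string, string>> m_libraries =
                new Dictionary<string, Dictionary<string, string>>();

            public void AddTrack(string user, string trackName, byte[] audioBytes)
            {
                string hash;
                using (var sha = SHA256.Create())
                    hash = Convert.ToBase64String(sha.ComputeHash(audioBytes));

                // Store the bytes only if no identical copy exists yet.
                if (!m_blobs.ContainsKey(hash))
                    m_blobs[hash] = audioBytes;

                if (!m_libraries.ContainsKey(user))
                    m_libraries[user] = new Dictionary<string, string>();

                // The user's "copy" is just a reference to the shared blob.
                m_libraries[user][trackName] = hash;
            }

            public byte[] GetTrack(string user, string trackName)
            {
                return m_blobs[m_libraries[user][trackName]];
            }
        }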

    Read the article

  • Mount "locked" SD card as read-write in GNU/Linux

    - by Vi
    My Canon PowerShot A470 + CHDK can write to SD cards that are "locked" (the lock switch is used to make the card bootable), but GNU/Linux refuses to write to `/dev/mmcblk1': Read-only file system (I'm using a "Texas Instruments 5-in-1 Multimedia Card Reader"). So I have to flip that switch on and off again and again ("unlocked" to write to the card in Linux, "locked" to boot the camera from it). How can I force the locked card to be writable in GNU/Linux?

    Read the article

  • 4K sectors: Why are hard drives moving to 4096 byte sectors, vs. 512 byte sectors? Are 4K sectors a

    - by Chris W. Rea
    I've noticed that some Western Digital hard drives are now sporting 4K sectors, that is, the sectors are larger: 4096 bytes vs. the actual de facto standard of 512 bytes. So: What's the big deal with 4K sectors? Is it marketing hype, or a real advantage? Why should somebody building a new PC care, or not, about 4K sectors? Why is this transition taking place now? Why didn't it happen sooner? Are there things to look out for when buying a 4K sector hard drive? e.g. incompatibility? Anything else we should know about 4K sectors?

    Read the article

  • How to change I/O priority of a process or thread in Win7?

    - by romkyns
    Process Explorer is able to show the effective IO priority of a given thread, but not change it. Seeing as IO priority support is a comparatively new feature, most programs don't set their own IO priorities. It appears that by default the IO priority is derived from the thread priority (rather than process priority), which Process Explorer can't modify either. Are there any other tools out there that can help me change the IO priority of a given thread / all threads of a given process?
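
    For background, a process can lower the I/O priority of its own work using the documented background-processing mode; the sketch below (my own example) shows the call for the current process. It does not change another process's threads, which is what an external tool would be needed for:

        using System;
        using System.Runtime.InteropServices;

        // Uses the documented "background mode" of SetPriorityClass, which lowers
        // the CPU, I/O and memory priority of the *calling* process. Changing the
        // I/O priority of another process's threads is not covered by this API.
        class BackgroundIoPriority
        {
            private const uint PROCESS_MODE_BACKGROUND_BEGIN = 0x00100000;
            private const uint PROCESS_MODE_BACKGROUND_END = 0x00200000;

            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern IntPtr GetCurrentProcess();

            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool SetPriorityClass(IntPtr hProcess, uint dwPriorityClass);

            static void Main()
            {
                // Enter background mode: I/O issued from here on is marked very low priority.
                if (!SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN))
                    Console.WriteLine("Failed: " + Marshal.GetLastWin32Error());

                // ... do bulk, low-priority I/O here ...

                // Leave background mode again.
                SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
            }
        }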

    Read the article

  • What causes "All-in-one USB Card Reader" to create 6 drives that always appear in Disk Management?

    - by tim11g
    I installed a "All-in-one USB Card Reader" to read SD cards and other media. It has caused six new drives to appear in Disk Management with six new drive letter assignments. These drives and letters are always present, even when there are no cards in the reader. When unused, they are labeled "No Media". Why does this multifunction reader cause these phantom Disks to appear and consume drive letters? Every USB port can (and does) allow removable media to be mounted and assigned a drive letter, and the drive letter assignment "disappears" when the USB drive is removed. Why are these card reader's drives and letters staying allocated permanently? Is there anything that can be done to make the slots work like a typical USB drive? (The reader is in fact connected to USB).

    Read the article
