Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 477/594

  • XCP Project Kronos syslog error: "irq ... : nobody cared" on Dom0 host

    - by Vlad Fedin
    One of our production clusters driven by XCP suddenly went unresponsive. After a restart and some investigation we found the following entries in the dom0 machine's syslog:

        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659040] irq 339: nobody cared (try booting with the "irqpoll" option)
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659058] Pid: 0, comm: swapper/3 Tainted: G C O 3.2.0-24-generic #37-Ubuntu
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659060] Call Trace:
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659062] <IRQ> [<ffffffff810db37d>] __report_bad_irq+0x3d/0xe0
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659071] [<ffffffff810db605>] note_interrupt+0x135/0x190
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659074] [<ffffffff810d8e69>] handle_irq_event_percpu+0xa9/0x220
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659078] [<ffffffff8130ff3b>] ? radix_tree_lookup+0xb/0x10
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659081] [<ffffffff810d9031>] handle_irq_event+0x51/0x80
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659084] [<ffffffff810dc187>] handle_edge_irq+0x87/0x140
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659089] [<ffffffff813a8829>] __xen_evtchn_do_upcall+0x199/0x250
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659092] [<ffffffff813aa96f>] xen_evtchn_do_upcall+0x2f/0x50
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659096] [<ffffffff81666d3e>] xen_do_hypervisor_callback+0x1e/0x30
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659097] <EOI> [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659104] [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659107] [<ffffffff8100a1d0>] ? xen_safe_halt+0x10/0x20
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659110] [<fff

    IRQ 339 in cat /proc/interrupts:

        339: ... xen-pirq-msi-x eth0

    where eth0 is the hardware NIC. While the host machine seems to hang, guest machines continue to work, so our tiny internal monitoring on one of the virtual hosts logged something like this:

        [2012-10-26 20:31:51] [OK......] 200 OK : 113159149 ns
        [2012-10-26 20:32:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 47763284432 ns
        ...
        [2012-10-26 20:34:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 46894835070 ns
        [2012-10-26 20:34:57] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 16821741955 ns
        ...
        [2012-10-26 20:38:18] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 20103298289 ns
        [2012-10-26 20:38:37] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 17895754943 ns

    Host and guest OS: Ubuntu 12.04 LTS. The network controller:

        05:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
            Subsystem: ASUSTeK Computer Inc. Device 8369
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin A routed to IRQ 17
            Region 0: Memory at fe500000 (32-bit, non-prefetchable) [size=128K]
            Region 2: I/O ports at e000 [size=32]
            Region 3: Memory at fe520000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: e1000e
            Kernel modules: e1000e

    Any hints on how to debug this?

    Read the article

  • How many parameters in a C# method are acceptable?

    - by Valentin Heinitz
    I am new to C# and have to maintain a C# application. Now I've found a method having 32 parameters (not auto-generated code). From C/C++ I remember the rule of thumb of "4 parameters". It may be an old-fashioned rule dating back to old x86 compilers, where 4 parameters could be accommodated in registers (fast) and otherwise went on the stack. I am not concerned about performance, but I do have a feeling that 32 parameters per function are not easy to maintain, even in C#. Or am I completely out of date? What is the rule of thumb for C#? Thank you for any hint!
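
    The usual way to tame a signature like that is the parameter-object refactoring. A minimal sketch, with all names invented purely for illustration:

        // Hypothetical before/after of the parameter-object refactoring (all names invented).
        public class Address
        {
            public string Street { get; set; }
            public string City { get; set; }
            public string PostalCode { get; set; }
        }

        public class InvoiceRequest
        {
            public int CustomerId { get; set; }
            public Address BillingAddress { get; set; }
            public decimal NetAmount { get; set; }
            public decimal TaxRate { get; set; }
        }

        public class InvoiceService
        {
            // Before: a long, hard-to-maintain signature.
            public void CreateInvoice(int customerId, string street, string city, string postalCode,
                                      decimal netAmount, decimal taxRate /* ...and many more... */) { }

            // After: related parameters grouped into one request object.
            public void CreateInvoice(InvoiceRequest request) { }
        }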

    Read the article

  • How would MVVM be for games?

    - by Benny Jobigan
    Particularly for 2D games, and particularly Silverlight/WPF games. If you think about it, you can divide a game object into its view (the graphic on the screen) and a view-model/model (the state, AI, and other data for the object). In Silverlight, it seems common to make each object a user control, putting the model and view into a single object. I suppose the advantage of this is simplicity, but perhaps it's less clean or has some disadvantages in terms of the underlying "game engine". What are your thoughts on this matter? What are some advantages and disadvantages of using the MVVM pattern for game development? How about performance? All thoughts are welcome.
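
    To make the split concrete, here is a minimal hedged sketch (hypothetical names, assuming WPF/Silverlight data binding) of a game entity whose state and AI live in a view model, while the on-screen graphic is just a bound template:

        using System.ComponentModel;

        // Hypothetical enemy view model: the game loop updates this object,
        // and a separate XAML DataTemplate bound to X, Y and Health draws it.
        public class EnemyViewModel : INotifyPropertyChanged
        {
            private double _x;
            private double _y;
            private int _health = 100;

            public double X { get { return _x; } set { _x = value; OnPropertyChanged("X"); } }
            public double Y { get { return _y; } set { _y = value; OnPropertyChanged("Y"); } }
            public int Health { get { return _health; } set { _health = value; OnPropertyChanged("Health"); } }

            public void Update(double dt)
            {
                // Movement/AI logic lives here; the view only reacts to property changes.
                X += 10 * dt;
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }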

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly over SFTP to a remote location. This is useful when dumping large data sets and you don't have enough local disk space to create the backup file first and then copy it to a remote location. I've tried using Python + paramiko, which provides this functionality, but the performance is much worse than using the native OpenSSH/sftp binary to transfer files. Does anyone have any idea how to do this either with the native sftp client on Linux, or with some library like paramiko (but one that performs close to the native sftp client)?

    Read the article

  • JavaScript filter vs map problem

    - by graham.reeds
    As a continuation of my min/max across an array of objects question, I was wondering about the performance comparison of filter vs map. So I put together a test on the values in my code, as I was going to look at the results in Firebug. This is the code:

        var _vec = this.vec;
        min_x = Math.min.apply(Math, _vec.filter(function(el){ return el["x"]; }));
        min_y = Math.min.apply(Math, _vec.map(function(el){ return el["x"]; }));

    The mapped version returns the correct result, but the filtered version returns NaN. Breaking it out, stepping through, and finally inspecting the results, it appears that the inner function returns the x property of each element, but the actual array returned from filter is the unfiltered _vec. I believe my usage of filter is correct - can anyone else see my problem?

    Read the article

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon", which represents some tiny value that is much less than 1, but greater than 0. One way to think of it is as a very wide fixed-point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following:

    - +, -, +=, <, > and >= are the only heavily used operators.
    - Use of setEpsilon() and getInt() is extremely rare.
    - * is also rare, and does not even need to consider the epsilon values at all.

    Here is the class:

        #include <limits>

        struct int32Uepsilon {
          typedef int32Uepsilon Self;

          int32Uepsilon ()             { _value = 0; _eps = 0; }
          int32Uepsilon (const int &i) { _value = i; _eps = 0; }

          void setEpsilon() { _eps = 1; }

          Self operator+(const Self &rhs) const {
            Self result = *this;
            result._value += rhs._value;
            result._eps   += rhs._eps;
            return result;
          }

          Self operator-(const Self &rhs) const {
            Self result = *this;
            result._value -= rhs._value;
            result._eps   -= rhs._eps;
            return result;
          }

          Self operator-( ) const {
            Self result = *this;
            result._value = -result._value;
            result._eps   = -result._eps;
            return result;
          }

          Self operator*(const Self &rhs) const {
            return this->getInt() * rhs.getInt();   // XXX: discards epsilon
          }

          bool operator<(const Self &rhs) const {
            return (_value < rhs._value) ||
                   (_value == rhs._value && _eps < rhs._eps);
          }

          bool operator>(const Self &rhs) const {
            return (_value > rhs._value) ||
                   (_value == rhs._value && _eps > rhs._eps);
          }

          bool operator>=(const Self &rhs) const {
            return (_value >= rhs._value) ||
                   (_value == rhs._value && _eps >= rhs._eps);
          }

          Self &operator+=(const Self &rhs) {
            this->_value += rhs._value;
            this->_eps   += rhs._eps;
            return *this;
          }

          Self &operator-=(const Self &rhs) {
            this->_value -= rhs._value;
            this->_eps   -= rhs._eps;
            return *this;
          }

          int getInt() const { return(_value); }

        private:
          int _value;
          int _eps;
        };

        namespace std {
          template<>
          struct numeric_limits<int32Uepsilon> {
            static const bool is_signed = true;
            static int max() { return 2147483647; }
          };
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful:

    - 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24 ~ 28 bits of _value are used and up to 20 bits of _eps are used.
    - I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here.
    - Saturating addition/subtraction on _eps would be cool, but isn't really necessary.
    - Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up.
    - Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!

    Read the article

  • Will GTK's Pango and Cairo work well in Cocoa and MFC applications?

    - by Lothar
    I'm writing a GUI program and decided to go native on all platforms. But for all the stuff I need to draw myself, I would like to use the same drawing routines, because font and Unicode handling is so difficult and complex. Do you see any negative points in using Pango/Cairo? On Mac OS X I haven't succeeded in installing Pango/Cairo yet, which looks like a bad omen. I would also like to hear about the performance penalty. The first time I looked at Pango I thought: yes, that's the reason why software keeps getting slower despite better hardware.

    Read the article

  • Interpreters: How much simplification?

    - by Ray
    In my interpreter, code like the following

        x=(y+4)*z
        echo x

    parses and "optimizes" down to four single operations performed by the interpreter, pretty much assembly-like:

        add 4 to y
        multiply <last operation result> with z
        set x to <last operation result>
        echo x

    In modern interpreters (for example CPython, Ruby, PHP), how simplified are the "opcodes" that are ultimately executed by the interpreter? Could I achieve better performance by keeping the structures and commands for the interpreter more complex and high-level? That would surely be a lot harder, though, wouldn't it?
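
    For illustration only, a minimal C# sketch (the question doesn't name an implementation language, and all opcode names here are hypothetical) of the kind of low-level, stack-based opcode loop described above:

        using System;
        using System.Collections.Generic;

        enum Op { LoadVar, LoadConst, Add, Mul, StoreVar, Echo }

        class Instr
        {
            public Op Code;
            public object Arg;
            public Instr(Op code, object arg = null) { Code = code; Arg = arg; }
        }

        static class TinyVm
        {
            // Dispatch loop: each "simple" opcode does one small thing against an operand stack.
            public static void Run(IEnumerable<Instr> program, IDictionary<string, int> vars)
            {
                var stack = new Stack<int>();
                foreach (var ins in program)
                {
                    switch (ins.Code)
                    {
                        case Op.LoadVar:   stack.Push(vars[(string)ins.Arg]); break;
                        case Op.LoadConst: stack.Push((int)ins.Arg); break;
                        case Op.Add:       stack.Push(stack.Pop() + stack.Pop()); break;
                        case Op.Mul:       stack.Push(stack.Pop() * stack.Pop()); break;
                        case Op.StoreVar:  vars[(string)ins.Arg] = stack.Pop(); break;
                        case Op.Echo:      Console.WriteLine(vars[(string)ins.Arg]); break;
                    }
                }
            }
        }

        // x=(y+4)*z; echo x  would compile down to roughly:
        //   LoadVar y, LoadConst 4, Add, LoadVar z, Mul, StoreVar x, Echo x

    Keeping opcodes this simple keeps the dispatcher trivial; higher-level opcodes do more work per dispatch, which is the trade-off the question is asking about.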

    Read the article

  • Java 2D clip area to shape

    - by user2923880
    I'm quite new to graphics in Java and I'm trying to create a shape that clips to the bottom of another shape. Here is an example of what I'm trying to achieve, where the white line at the base of the shape is sort of clipped within the round edges. The current way I am doing this is like so:

        g2.setColor(gray);
        Shape shape = getShape(); // round rectangle
        g2.fill(shape);
        Rectangle rect = new Rectangle(shape.getBounds().x, shape.getBounds().y, width, height - 3);
        Area area = new Area(shape);
        area.subtract(new Area(rect));
        g2.setColor(white);
        g2.fill(area);

    I'm still experimenting with the clip methods, but I can't seem to get it right. Is this current method okay (performance-wise, since the component repaints quite often) or is there a more efficient way? Thanks in advance.

    Read the article

  • "Out of Memory" error in Lotus Notes automation from VBA

    - by PowerUser
    This VBA function sporadically fails with a Notes automation error: "Run-Time Error '7' Out of Memory". Naturally, when I try to reproduce it manually, everything runs fine.

        Function ToGMT(ByVal X As Date) As Date
            Static NtSession As NotesSession
            If NtSession Is Nothing Then
                Set NtSession = New NotesSession
                NtSession.Initialize
            End If
            '(do stuff)
        End Function

    To put this in context, this VBA function is being called by an Access query, 3-4 times per record, with 20,000 records. For performance reasons, the NotesSession has been made static. Any ideas why it sporadically gives an out-of-memory error? (Also, I'm initiating the NotesSession just so I can convert a datetime to GMT using Lotus's rules. If you know a better way, I'm listening.)

    Read the article

  • ASP.NET MVC ViewModel: should it be a class or a struct?

    - by Jonas Everest
    Hey guys, I have just been thinking about the concept of the view model objects we create in ASP.NET MVC. Our purpose is to instantiate one and pass it from the controller to the view, and the view reads it and displays the data. Those view models are usually instantiated through a constructor. We won't need to initialize the members later, we may not need to redefine or override a parameterless constructor, and we don't need inheritance there. So why don't we use a struct type for our view models instead of a class? It would enhance performance.
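
    For concreteness, a hedged sketch (hypothetical model) of the two shapes being compared; worth noting that a struct is copied by value every time it is assigned or passed to a method, while a class instance is passed by reference:

        // Hypothetical view model, sketched both ways for comparison.
        public class ProductViewModel            // reference type: one instance, passed by reference
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public struct ProductViewModelStruct     // value type: copied on every assignment or parameter pass
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }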

    Read the article

  • NHibernate parent-child save: redundant SQL UPDATE executed

    - by Broken Pipe
    I'm trying to save (insert) a parent object with a collection of child objects; all objects are new. I prefer to manually specify what to save/update and when, so I do not use any cascade saves in the mappings and I flush sessions myself. So basically I save this object graph like:

        session.Save(Parent);
        foreach (var child in Parent.Childs)
        {
            session.Save(child);
        }
        session.Flush();

    I expect this code to insert the parent row, then each child row. However, NHibernate executes this SQL:

        INSERT INTO PARENT ....
        INSERT INTO CHILD ....
        UPDATE CHILD SET ParentId=@1 WHERE Id=@2

    This UPDATE statement is absolutely unnecessary; ParentId was already set correctly in the INSERT. How do I get rid of it? Performance is very important to me.
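
    A common cause of that extra UPDATE is a collection mapping that is not marked inverse, so the parent's collection also tries to write the foreign key. A hedged sketch, assuming Fluent NHibernate and a Child.Parent back-reference (property names invented; the hbm.xml equivalent is inverse="true" on the collection element):

        using FluentNHibernate.Mapping;

        public class ParentMap : ClassMap<Parent>
        {
            public ParentMap()
            {
                Id(x => x.Id);
                HasMany(x => x.Childs)
                    .Inverse()        // child side owns the FK, so no follow-up UPDATE after the INSERTs
                    .Cascade.None();  // keep saves manual, as in the question
            }
        }

        public class ChildMap : ClassMap<Child>
        {
            public ChildMap()
            {
                Id(x => x.Id);
                References(x => x.Parent);  // hypothetical many-to-one back-reference;
                                            // set child.Parent = parent before session.Save(child)
            }
        }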

    Read the article

  • Morfik - suitability for medium-scale web enterprise applications

    - by MaikB
    I'm investigating technologies with which to develop a medium-scale (up to 100 or 200 simultaneous users) database-driven web application, and someone suggested Morfik. However, outside of the Morfik company I can find practically zero community support - no active blogs, no tutorials, no videos, no books - and this is of some concern (especially when compared to the support for C# / ASP.NET / NHibernate etc.). Deciding between Morfik (untried and not widely used, AFAIK) and the other technologies I mentioned (tried, tested, widely used) is becoming a critical issue for my company. Has anyone had success using Morfik in this kind of circumstance? What kind of performance did you achieve?

    Read the article

  • Thread.Sleep vs Monitor.Wait vs RegisteredWaitHandle?

    - by Royi Namir
    (The following items have different goals, but I'm interested in knowing how they are "paused".) Questions: Thread.Sleep - does it impact performance on a system? Does it tie up a thread while it waits? What about Monitor.Wait - what is the difference in the way they "wait"? Do they tie up a thread with their wait? And what about RegisteredWaitHandle? This method accepts a delegate that is executed when a wait handle is signaled; while it's waiting, it doesn't tie up a thread. So some threads are paused and can be woken by a delegate, while others just wait? Spin? Can someone please make things clearer? Edit: http://www.albahari.com/threading/part2.aspx
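
    A hedged sketch of the three waiting styles being compared (illustrative only; the helper class and event are hypothetical):

        using System;
        using System.Threading;

        class WaitStyles
        {
            static readonly object Gate = new object();
            static readonly ManualResetEvent Signal = new ManualResetEvent(false);

            static void Demo()
            {
                // 1) Thread.Sleep: the calling thread is blocked (parked by the OS) for the duration,
                //    so it does tie up a thread, but it burns no CPU while parked.
                Thread.Sleep(1000);

                // 2) Monitor.Wait: the calling thread releases the lock and blocks until Pulse/PulseAll;
                //    it also ties up a thread while it waits.
                lock (Gate)
                {
                    Monitor.Wait(Gate);
                }

                // 3) RegisteredWaitHandle: no blocked thread; a thread-pool thread runs the callback
                //    only once the handle is signaled (or the timeout elapses).
                RegisteredWaitHandle registration = ThreadPool.RegisterWaitForSingleObject(
                    Signal,
                    (state, timedOut) => Console.WriteLine(timedOut ? "timed out" : "signaled"),
                    null,
                    Timeout.Infinite,
                    true /* executeOnlyOnce */);
            }
        }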

    Read the article

  • ASP.NET membership provider API: usability and best practice

    - by Andrew Florko
    Hello everybody, the Membership/Role/Profile provider API appeared in the early days of ASP.NET. Nearly every time, I can't live with the standard API and have to add some extra functionality (for sorting, retrieving, etc.). I also often have to use a different database structure (with foreign keys to some tables, for example) or think about performance improvements. These considerations forced the teams I took part in to build their own providers, but I can't stand implementing the provider API (because at least 70% of the standard functionality goes unused). Moreover, providers that were built for specific projects were rarely reused. I wonder if someone has found a swiss-army-knife implementation of these early-days provider APIs that is useful for any kind of project without refactoring... Or do you use your own implementations of the early-days APIs? Or maybe you abandon the standard architecture and use lightweight implementations? Thank you in advance.

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in the Chrome browser (using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (a Base64-encoded string). Is there a JavaScript library that can be used to scale the image down to a certain size? Currently we are scaling it via CSS, but we have to pay performance penalties, as the pictures are mostly 100 times bigger than required. An additional concern is the load on localStorage, which we use to save our snapshots. Does anyone know of a way to process these data-URI-formatted pictures and reduce their size by scaling them down? References:

        Data URI scheme: http://en.wikipedia.org/wiki/Data_URI_scheme
        Chrome Extensions API: http://code.google.com/chrome/extensions/tabs.html
        The "Recently Closed Tabs" Chrome Extension: http://code.google.com/p/recently-closed-tabs

    Read the article

  • Possible to InvalidateVisual() on a given region instead of entire WPF control?

    - by Scott Bilas
    I have a complex WPF control that draws a lot of primitives in its OnRender (it's sort of like a map). When a small portion of it changes, I'd like to re-issue render commands only for the affected elements, instead of re-running the entire OnRender. While I'm fine with my OnRender function's performance on a resize or the like, it's not fast enough for mouse-hover-based highlighting of primitives. Currently the only way I know to force a screen update is to call InvalidateVisual(), and there is no way to pass in a dirty rect region to invalidate. Is the lowest granularity of WPF screen composition the UI element? Will I need to render my primitives into an intermediate target and then have that call InvalidateVisual() to update the screen?
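
    One approach worth sketching (a hedged example with a hypothetical control, not the asker's code): host each primitive in its own DrawingVisual, so a hover highlight only re-renders the one visual that changed instead of invalidating the whole control's OnRender:

        using System.Windows;
        using System.Windows.Media;

        public class PrimitiveHost : FrameworkElement
        {
            private readonly VisualCollection _visuals;

            public PrimitiveHost()
            {
                _visuals = new VisualCollection(this);
            }

            public DrawingVisual AddPrimitive()
            {
                var visual = new DrawingVisual();
                _visuals.Add(visual);
                return visual;
            }

            // Redraw just one primitive, e.g. when the mouse hovers over it.
            public void RedrawPrimitive(DrawingVisual visual, Brush brush, Rect bounds)
            {
                using (DrawingContext dc = visual.RenderOpen())
                {
                    dc.DrawRectangle(brush, null, bounds);
                }
            }

            protected override int VisualChildrenCount
            {
                get { return _visuals.Count; }
            }

            protected override Visual GetVisualChild(int index)
            {
                return _visuals[index];
            }
        }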

    Read the article

  • LdapErr: DSID-0C0903AA, data 52e: authenticating against AD '08 with pam_ldap

    - by Stefan M
    I have full admin access to the AD '08 server I'm trying to authenticate against. The error code means invalid credentials, but I wish this were as simple as me typing in the wrong password. First of all, I have a working Apache mod_ldap configuration against the same domain:

        AuthType basic
        AuthName "MYDOMAIN"
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://10.220.100.10/OU=Companies,MYCOMPANY,DC=southit,DC=inet?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN svc_webaccess_auth
        AuthLDAPBindPassword mySvcWebAccessPassword
        Require ldap-group CN=Service_WebAccess,OU=Groups,OU=MYCOMPANY,DC=southit,DC=inet

    I'm showing this because it works without the use of any Kerberos, which so many other guides out there recommend for system authentication to AD. Now I want to translate this into pam_ldap.conf for use with OpenSSH. The /etc/pam.d/common-auth part is simple:

        auth sufficient pam_ldap.so debug

    This line is processed before any other. I believe the real issue is configuring pam_ldap.conf:

        host 10.220.100.10
        base OU=Companies,MYCOMPANY,DC=southit,DC=inet
        ldap_version 3
        binddn svc_webaccess_auth
        bindpw mySvcWebAccessPassword
        scope sub
        timelimit 30
        pam_filter objectclass=User
        nss_map_attribute uid sAMAccountName
        pam_login_attribute sAMAccountName
        pam_password ad

    Now I've been monitoring LDAP traffic on the AD host using Wireshark. I've captured a successful session from Apache's mod_ldap and compared it to a failed session from pam_ldap. The first bindRequest is a success using the svc_webaccess_auth account, and the searchRequest is a success and returns 1 result. The last bindRequest, using my user, is a failure and returns the above error code. Everything looks identical except for this one line in the filter for the searchRequest, here showing mod_ldap:

        Filter: (&(objectClass=user)(sAMAccountName=ivasta))

    The second one is pam_ldap:

        Filter: (&(&(objectclass=User)(objectclass=User))(sAMAccountName=ivasta))

    My user is named ivasta. However, the searchRequest does not return failure; it does return 1 result. I've also tried this with ldapsearch on the CLI. It's the bindRequest that follows the searchRequest that fails with the above error code 52e. Here is the failure message of the final bindRequest:

        resultcode: invalidcredentials (49)
        80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772

    This should mean invalid password, but I've tried with other users and with very simple passwords. Does anyone recognize this from their own struggles with pam_ldap and AD?

    Edit: Worth noting is that I've also tried pam_password crypt, and pam_filter sAMAccountName=User, because this worked when using ldapsearch:

        ldapsearch -LLL -h 10.220.100.10 -x -b "ou=Users,ou=mycompany,dc=southit,dc=inet" -v -s sub -D svc_webaccess_auth -W '(sAMAccountName=ivasta)'

    This works using the svc_webaccess_auth account password. This account has scan access to that OU for use with Apache's mod_ldap.

    Read the article

  • Scripting language to embed into a Java server application

    - by Alexey Kalmykov
    I want to implement the business logic of a server-side Java application as a set of scripts. So I need the following from a scripting engine:

    - Maximum Java interoperability (i.e. the Spring framework)
    - Script reloading and recompiling
    - Easy DB access from the scripting language
    - Clear and simple syntax (some DSL capabilities would be nice to have) and an easy learning curve for non-hardcore developers
    - Performance and stability

    I had some experience in a similar project with Rhino and it was pretty good, but I want to see if there is something better. Currently I'm looking into Groovy. JRuby and Jython are a bit more complex than I need for this task. Any other suggestions? What should I take into consideration?

    Read the article

  • Virtual and Physical Memory / OutOfMemoryException

    - by user417518
    Hi, I am working on a 64-bit .NET Windows service application that essentially loads a bunch of data for processing. While performing data-volume testing, we were able to overwhelm the process and it threw an OutOfMemoryException (I do not have any performance statistics on the process from when it failed). I have a hard time believing that the process requested a chunk of memory that would have exceeded the allowable address space for the process, since it's running on a 64-bit machine. I do know that the process is running on a machine that is consistently in the neighborhood of 80%-90% physical memory usage. My question is: can the CLR throw an OutOfMemoryException if the machine is critically low on available physical memory, even though the process wouldn't exceed its allowable amount of virtual memory? Thanks for your help!
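
    A hedged sketch of one way to capture the missing statistics the next time it fails (hypothetical helper; uses only GC and Process counters):

        using System;
        using System.Diagnostics;

        static class MemoryDiagnostics
        {
            // Wrap the processing work so basic memory figures get logged if an OOM is thrown.
            public static void LogOnOutOfMemory(Action work)
            {
                try
                {
                    work();
                }
                catch (OutOfMemoryException)
                {
                    Process proc = Process.GetCurrentProcess();
                    Console.Error.WriteLine(
                        "OOM: managed heap={0:N0} B, private bytes={1:N0} B, working set={2:N0} B, virtual size={3:N0} B",
                        GC.GetTotalMemory(false),
                        proc.PrivateMemorySize64,
                        proc.WorkingSet64,
                        proc.VirtualMemorySize64);
                    throw;
                }
            }
        }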

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux's /proc/meminfo shows a number of memory usage statistics:

        MemTotal:   4040732 kB
        MemFree:      23160 kB
        Buffers:     163340 kB
        Cached:     3707080 kB
        SwapCached:       0 kB
        Active:     1129324 kB
        Inactive:   2762912 kB

    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (belonging to "Cached" and "Active") and inactive page cache ("Inactive" + "Cached"). What I want to do is to measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first, I was inclined to use "free" + "inactive", but Linux's free utility uses "free" + "cached" in its buffer-adjusted display, so I am curious what a better approach is. When the kernel runs out of memory, what is the priority of pages to drop, and what is the more appropriate metric to measure available memory?

    Read the article

  • Foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros/cons, from a performance/indexing/data-management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index. Primary key as foreign key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )

    Pure surrogate key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )

    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • assistance with classifying tests

    - by amateur
    I have a .NET C# library that I have created, and I am currently writing some unit tests for it. At present I am writing unit tests for a cache provider class that I have created. Being new to writing unit tests, I have two questions. First: my cache provider class is the abstraction layer over my distributed cache, AppFabric. Testing aspects of my cache provider class, such as adding to the AppFabric cache or removing from the cache, therefore involves communicating with AppFabric. Are the tests that cover this still categorised as unit tests, or are they integration tests? Second: because the above methods interact with AppFabric, I would like to time them; if they take longer than a specified benchmark, the tests fail. Again I ask: can such a performance-benchmark test be classified as a unit test? The way I have my tests set up, I want to group all unit tests together, all integration tests together, and so on, which is why I ask these questions; I would appreciate any input.
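
    For the second point, a hedged sketch (assuming NUnit and a hypothetical CacheProvider class) of a timed test that is explicitly grouped away from the fast unit tests via a category:

        using System.Diagnostics;
        using NUnit.Framework;

        [TestFixture]
        public class CacheProviderIntegrationTests
        {
            [Test]
            [Category("Integration")]   // talks to AppFabric, so it is grouped with the integration tests
            public void Add_CompletesWithinBenchmark()
            {
                var provider = new CacheProvider();   // hypothetical class under test
                var stopwatch = Stopwatch.StartNew();

                provider.Add("key", "value");

                stopwatch.Stop();
                Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(200),
                            "Add exceeded the performance benchmark");
            }
        }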

    Read the article

  • Threading vs single thread

    - by user177883
    Is it always guaranteed that a multi-threaded application will run faster than a single-threaded application? I have two threads that populate data from a data source, but for different entities (e.g. a database, from two different tables), and the single-threaded version of the application seems to be running faster than the version with two threads. Why would that be? When I look at the performance monitor, both CPUs are very spiky - is this due to context switching? What are the best practices to drive the CPU and fully utilize it? I hope this is not ambiguous.

    Read the article

  • Is ReaderWriterLockSlim.EnterUpgradeableReadLock() essentially the same as Monitor.Enter()?

    - by Neil Barnwell
    So I have a situation where I may have many, many reads and only the occasional write to a resource shared between multiple threads. A long time ago I read about ReaderWriterLock, and I have read about ReaderWriterGate, which attempts to mitigate the issue where many incoming writes trump reads and hurt performance. However, now I've become aware of ReaderWriterLockSlim... From the docs, I believe that there can only be one thread in "upgradeable mode" at any one time. In a situation where the only access I'm using is EnterUpgradeableReadLock() (which is appropriate for my scenario), is there much difference compared to just sticking with lock(){}? Here's the excerpt: "A thread that tries to enter upgradeable mode blocks if there is already a thread in upgradeable mode, if there are threads waiting to enter write mode, or if there is a single thread in write mode." Or does the recursion policy make any difference to this?
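
    A hedged sketch (hypothetical cache class) of the practical difference: while one thread holds the upgradeable read lock, other threads can still enter plain read mode, whereas lock(){} would exclude them entirely:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public class ReadMostlyCache<TKey, TValue>
        {
            private readonly Dictionary<TKey, TValue> _items = new Dictionary<TKey, TValue>();
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            public TValue GetOrAdd(TKey key, Func<TKey, TValue> factory)
            {
                _lock.EnterUpgradeableReadLock();     // only one thread at a time here...
                try
                {
                    TValue value;
                    if (_items.TryGetValue(key, out value))
                        return value;                 // common path: read only

                    _lock.EnterWriteLock();           // rare path: escalate for the write
                    try
                    {
                        value = factory(key);
                        _items[key] = value;
                        return value;
                    }
                    finally
                    {
                        _lock.ExitWriteLock();
                    }
                }
                finally
                {
                    _lock.ExitUpgradeableReadLock();
                }
            }

            public bool TryGet(TKey key, out TValue value)
            {
                _lock.EnterReadLock();                // ...but many concurrent readers are still allowed here
                try { return _items.TryGetValue(key, out value); }
                finally { _lock.ExitReadLock(); }
            }
        }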

    Read the article
