Search Results

Search found 105847 results on 4234 pages for 'sql server performance'.


  • Help required to increase the performance of a MySQL query

    - by Joseph
    Hi all, I am using the following query in MySQL to fetch data from a table. It's taking too long because of the conditional check within the aggregate function. Please help me make it faster:

        SELECT testcharfield
             , SUM(IF(Type = 'pi', quantity, 0)) AS OB
             , SUM(IF(Type = 'pe', quantity, 0)) AS CB
        FROM Table1
        WHERE sequenceID = 6107
        GROUP BY testcharfield
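
    One direction worth trying, sketched against the table and column names above (the index name is illustrative): a covering index lets MySQL answer the query from the index alone instead of touching the row data for every group.

        -- Hypothetical covering index: the WHERE column first, then the
        -- GROUP BY column, then the columns read inside SUM(IF(...)).
        CREATE INDEX ix_seq_char_type_qty
            ON Table1 (sequenceID, testcharfield, Type, quantity);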

    Read the article

  • Broken SQL 2008 SP1 Web Edition (cannot log in with SSMS)

    - by gerryLowry
    Scenario: my installation of SQL Server 2008 Web Edition SP1 was working properly. Since I recently joined Microsoft's Website Spark*, I removed SQL 2008 and installed it again using my Website Spark edition and license from the MSDN download site. Next, I updated SQL 2008 to SP1 (this is required because I'm running Windows Server 2008 R2 Web Edition).

    When I launch SSMS (SQL Server Management Studio), "User name" is "myhost\Administrator" and is greyed out so it cannot be changed. When I installed my Website Spark version, I did not include "myhost\Administrator" when configuring the SQL 2008 service accounts. Instead I created an administrator account "myhost\mySQLaccount".

    ERROR MESSAGE:

        Connect to Server (X)
        Cannot connect to (local)
        Additional information:
        Login failed for user 'myhost\Administrator' (Microsoft SQL Server, Error: 18456)

    I tried to use SQL Server Configuration Manager to correct this problem but could not find any useful way to fix this issue. How do I fix this problem?

        Connect to Server ...
        Server type: Database Engine
        Server name: (local)
        Authentication: Windows Authentication

    Please advise. Thank you. Gerry

    * http://www.microsoft.com/web/websitespark/default.aspx
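
    A hedged recovery sketch, assuming "myhost\mySQLaccount" was granted sysadmin during setup: run SSMS under that account (Shift+right-click, "Run as different user"), then grant the Administrator login access from a query window. sp_addsrvrolemember is the SQL 2008-era way to add a member to a server role.

        -- Run while connected as myhost\mySQLaccount; names are from the question.
        CREATE LOGIN [myhost\Administrator] FROM WINDOWS;
        EXEC sp_addsrvrolemember 'myhost\Administrator', 'sysadmin';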

    Read the article

  • Profiling memory performance for part of a Rails project

    - by Florian Pilz
    I want to profile the memory usage of an important library class in my Rails project. It uses ActiveRecord, so I need all the Rails dependencies to profile it. As far as I know, I need a patched Ruby (rubygc) so that script/profile and script/benchmark can track memory usage. I tried to follow the official guide to patch the source code of Ruby 1.8.6 (p399) and 1.8.7 (p248), but both fail with the following message:

        patching file gc.c
        Hunk #2 succeeded at 50 with fuzz 2 (offset 2 lines).
        Hunk #3 succeeded at 87 with fuzz 2 (offset 6 lines).
        Hunk #4 succeeded at 153 with fuzz 1 (offset 45 lines).
        Hunk #5 succeeded at 409 with fuzz 2 (offset 274 lines).
        Hunk #6 FAILED at 462.
        Hunk #7 FAILED at 506.
        Hunk #8 FAILED at 520.
        Hunk #9 FAILED at 745.
        Hunk #10 FAILED at 754.
        Hunk #11 FAILED at 923.
        Hunk #12 succeeded at 711 (offset 46 lines).
        Hunk #13 succeeded at 730 (offset 46 lines).
        Hunk #14 succeeded at 766 (offset 55 lines).
        Hunk #15 succeeded at 1428 (offset 87 lines).
        Hunk #16 succeeded at 1492 (offset 89 lines).
        Hunk #17 FAILED at 1541.
        Hunk #18 FAILED at 1551.
        Hunk #19 succeeded at 1571 (offset 91 lines).
        Hunk #20 succeeded at 1592 (offset 91 lines).
        Hunk #21 succeeded at 1601 (offset 91 lines).
        Hunk #22 succeeded at 1826 (offset 108 lines).
        Hunk #23 succeeded at 1843 (offset 108 lines).
        Hunk #24 succeeded at 1926 (offset 108 lines).
        Hunk #25 succeeded at 2118 (offset 108 lines).
        Hunk #26 succeeded at 2563 (offset 100 lines).
        Hunk #27 succeeded at 2611 with fuzz 1 (offset 102 lines).
        Hunk #28 succeeded at 2628 (offset 102 lines).
        8 out of 28 hunks FAILED -- saving rejects to file gc.c.rej
        patching file intern.h
        Hunk #1 succeeded at 268 (offset 15 lines).

    I also tried to use ruby-prof, but I always get the error "uninitialized constant RubyProf::Test". I don't know how to use the "memory" gem, and neither memprof nor bleak_house could be installed successfully. If I get a patched Ruby running, I should be fine, but any other way to profile the memory of library classes is welcome. Thanks for helping!
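
    For reference, a minimal ruby-prof sketch of the target shape; note that the MEMORY measure mode itself still requires a patched interpreter, so this is not a way around the patch (MyLibraryClass is a placeholder):

        # Assumes ruby-prof on a patched Ruby that supports RubyProf::MEMORY.
        require 'ruby-prof'

        RubyProf.measure_mode = RubyProf::MEMORY
        result = RubyProf.profile do
          MyLibraryClass.new.do_work   # the library code under test
        end
        RubyProf::FlatPrinter.new(result).print(STDOUT)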

    Read the article

  • Performance issue in an Android MapView navigation app

    - by poeschlorn
    Hey guys, I've got a question about making a navigation app faster and more stable. The basic layer of my app is a simple MapView, covered with several overlays (two markers for start and destination and one for the route). My idea is to calculate the route on a thread, so that the app won't hang up during the calculation of a more complex route (like it does right now). After introducing the thread, no updates are shown any more. Maybe you can help me with a short glance at an excerpt of my code below:

        private class MyLocationListener implements LocationListener {
            @Override
            public void onLocationChanged(Location loc) {
                posUser = new GeoPoint((int) (loc.getLatitude() * 1E6),
                                       (int) (loc.getLongitude() * 1E6));
                new Thread() {
                    public void run() {
                        mapView.invalidate();           // Erase old overlays
                        mapView.getOverlays().clear();
                        // Draw updated overlay elements and adjust basic map settings
                        updateText();
                        if (firstRefresh) {
                            adjustMap();
                            firstRefresh = false;
                        }
                        getAndPaintRoute();
                        drawMarkers();
                    }
                };
            }
        }

    Some features have been collected into methods like drawMarkers() or updateText(); they don't need any more attention. ;-)
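
    Two likely culprits stand out in the excerpt: the anonymous Thread is created but never start()ed, and Android views may only be touched from the UI thread. A hedged reshaping, reusing the question's own method names:

        // Sketch: do the heavy route work off the UI thread, then post the
        // view updates back onto it via View.post().
        new Thread(new Runnable() {
            public void run() {
                getAndPaintRoute();                  // heavy calculation
                mapView.post(new Runnable() {        // back on the UI thread
                    public void run() {
                        mapView.getOverlays().clear();
                        drawMarkers();
                        updateText();
                        mapView.invalidate();        // trigger the redraw
                    }
                });
            }
        }).start();                                  // the original never started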

    Read the article

  • Optimize code performance when odd/even threads are doing different things in CUDA

    - by Orion Nebula
    Hi all! I have two large vectors and I am trying to do a sort of element-wise multiplication, where an even-numbered element in the first vector is multiplied by the next odd-numbered element in the second vector, and the odd-numbered element in the first vector is multiplied by the preceding even-numbered element in the second vector. For example, if vector 1 is V1(1) V1(2) V1(3) V1(4) and vector 2 is V2(1) V2(2) V2(3) V2(4), then:

        V1(1) * V2(2)
        V1(3) * V2(4)
        V1(2) * V2(1)
        V1(4) * V2(3)

    I have written CUDA code to do this (Pds has the elements of the first vector in shared memory, Nds the second vector):

        // instead of using % 2, I check the lowest bit to decide
        // whether the number is odd or even -- faster
        if ((tx & 0x0001) == 0x0000)
            Nds[tx+1] = Pds[tx] * Nds[tx+1];
        else
            Nds[tx-1] = Pds[tx] * Nds[tx-1];
        __syncthreads();

    Is there any way to further accelerate this code or avoid divergence? Thanks
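
    One branch-free sketch, using the question's own names: XOR-ing the thread index with 1 maps each even tx to tx+1 and each odd tx to tx-1, which is exactly the pairing above, so both sides of the if/else collapse into one statement and the warp no longer diverges.

        // tx ^ 1 == tx + 1 for even tx, and tx - 1 for odd tx.
        int partner = tx ^ 1;
        Nds[partner] = Pds[tx] * Nds[partner];
        __syncthreads();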

    Read the article

  • Can I store SQL Server sort order in a variable?

    - by Steve Weet
    I have the following SQL within a stored procedure. Is there a way to remove the IF statement and pass the 'ASC'/'DESC' option as a variable? I know I could do the query a number of different ways, or return a table and sort it externally, etc. I would just like to know if I can avoid duplicating the CASE statement.

        IF @sortOrder = 'Desc'
        BEGIN
            SELECT * FROM #t_results
            ORDER BY
                CASE WHEN @OrderBy = 'surname'           THEN surname           END DESC,
                CASE WHEN @OrderBy = 'forename'          THEN forename          END DESC,
                CASE WHEN @OrderBy = 'fullName'          THEN fullName          END DESC,
                CASE WHEN @OrderBy = 'userId'            THEN userId            END DESC,
                CASE WHEN @OrderBy = 'MobileNumber'      THEN MSISDN            END DESC,
                CASE WHEN @OrderBy = 'DeviceStatus'      THEN DeviceStatus      END DESC,
                CASE WHEN @OrderBy = 'LastPosition'      THEN LastPosition      END DESC,
                CASE WHEN @OrderBy = 'LastAlert'         THEN LastAlert         END DESC,
                CASE WHEN @OrderBy = 'LastCommunication' THEN LastCommunication END DESC,
                CASE WHEN @OrderBy = 'LastPreAlert'      THEN LastPreAlert      END DESC
        END
        ELSE
        BEGIN
            SELECT * FROM #t_results
            ORDER BY
                CASE WHEN @OrderBy = 'surname'           THEN surname           END ASC,
                CASE WHEN @OrderBy = 'forename'          THEN forename          END ASC,
                CASE WHEN @OrderBy = 'fullName'          THEN fullName          END ASC,
                CASE WHEN @OrderBy = 'userId'            THEN userId            END ASC,
                CASE WHEN @OrderBy = 'MobileNumber'      THEN MSISDN            END ASC,
                CASE WHEN @OrderBy = 'DeviceStatus'      THEN DeviceStatus      END ASC,
                CASE WHEN @OrderBy = 'LastPosition'      THEN LastPosition      END ASC,
                CASE WHEN @OrderBy = 'LastAlert'         THEN LastAlert         END ASC,
                CASE WHEN @OrderBy = 'LastCommunication' THEN LastCommunication END ASC,
                CASE WHEN @OrderBy = 'LastPreAlert'      THEN LastPreAlert      END ASC
        END
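
    One way to keep a single statement, sketched against the temp table above: give each column a pair of direction-specific CASE expressions, so whichever direction is not selected yields NULL and has no effect on the ordering.

        -- Repeat the pair for each remaining @OrderBy value.
        SELECT * FROM #t_results
        ORDER BY
            CASE WHEN @sortOrder =  'Desc' AND @OrderBy = 'surname'
                 THEN surname END DESC,
            CASE WHEN @sortOrder <> 'Desc' AND @OrderBy = 'surname'
                 THEN surname END ASC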

    Read the article

  • SQL Server Management Studio unable to connect to local instance

    - by Ben Collins
    I'm not a DBA, I'm a developer, and I'm having trouble with SQL Server Management Studio. I installed SQL Server 2008 Standard on Windows Server 2008 R2, and according to SQL Server Configuration Manager I've got two instances: OFFICESERVERS (for SharePoint) and MSSQLSERVER. When I open SQL Server Management Studio I can only discover OFFICESERVERS. I've checked the protocol configuration for both instances and didn't see anything that indicates why this would be. Any hints?
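
    A quick way to separate discovery problems from connectivity problems, sketched with the instance names from the question: connect directly from a command prompt instead of relying on the SSMS browse list, which depends on the SQL Server Browser service.

        :: -S names the server (MSSQLSERVER is the default instance, so "."),
        :: -E uses Windows authentication.
        sqlcmd -S . -E -Q "SELECT @@SERVERNAME"
        sqlcmd -S .\OFFICESERVERS -E -Q "SELECT @@SERVERNAME"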

    Read the article

  • How to have a WHERE clause on an insert or an update in LINQ to SQL?

    - by Kelsey
    I am trying to convert the following stored proc to a LINQ to SQL call (this is a simplified version of the SQL):

        INSERT INTO [MyTable] ([Name], [Value])
        SELECT @name, @value
        WHERE NOT EXISTS (SELECT [Value] FROM [MyTable] WHERE [Value] = @value)

    The DB does not have a constraint on the field that is getting checked, so in this specific case the check needs to be made manually. Also, there are many items constantly being inserted, so I need to make sure that when this specific insert happens there is no dupe of the value field. My first hunch is to do the following:

        using (TransactionScope scope = new TransactionScope())
        {
            if (Context.MyTables.SingleOrDefault(t => t.Value == in.Value) == null)
            {
                MyLinqModels.MyTable t = new MyLinqModels.MyTable()
                {
                    Name = in.Name,
                    Value = in.Value
                };
                // Do some stuff in the transaction
                scope.Complete();
            }
        }

    This is the first time I have really run into this scenario, so I want to make sure I am going about it the right way. Does this seem correct, or can anyone suggest a better way of going about it without having two separate calls?

    Edit: I am running into a similar issue with an update:

        UPDATE [AnotherTable]
        SET [Code] = @code
        WHERE [ID] = @id AND [Code] IS NULL

    How would I do the same check with LINQ to SQL? I assume I need to do a get, then set all the values and submit, but what if someone updates [Code] to something other than null between my get and when the update executes? Same problem as the insert...
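
    For the update case, one hedged option keeps the guard and the write atomic by pushing the whole statement through DataContext.ExecuteCommand instead of a read-modify-write cycle:

        // Sketch: rows == 0 means another writer set [Code] first.
        int rows = Context.ExecuteCommand(
            "UPDATE [AnotherTable] SET [Code] = {0} WHERE [ID] = {1} AND [Code] IS NULL",
            code, id);
        bool won = (rows == 1);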

    Read the article

  • XmlSerializer Performance Issue when Specifying XmlRootAttribute

    - by Dougc
    I'm currently having a really weird issue and I can't seem to figure out how to resolve it. I've got a fairly complex type which I'm trying to serialize using the XmlSerializer class. This actually functions fine and the type serializes properly, but it takes a very long time to do so: around 5 seconds, depending on the data in the object. After a bit of profiling I've narrowed the issue down, bizarrely, to specifying an XmlRootAttribute when calling XmlSerializer.Serialize. I do this to change the name of a collection being serialized from ArrayOf to something a bit more meaningful. Once I remove the parameter, the operation is almost instant! Any thoughts or suggestions would be excellent, as I'm entirely stumped on this one!
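
    A likely explanation, offered as a well-known caveat rather than a diagnosis: only the XmlSerializer(Type) and XmlSerializer(Type, String) constructors cache the generated serialization assembly; the overloads that take an XmlRootAttribute emit and load a new assembly on every construction. Caching one instance avoids the repeated codegen (the type and root name below are placeholders):

        // Build once, reuse for every Serialize call.
        private static readonly XmlSerializer CachedSerializer =
            new XmlSerializer(typeof(MyComplexType),
                              new XmlRootAttribute("MeaningfulName"));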

    Read the article

  • Parameterising SQL in SSIS

    - by Anonymouslemming
    Hi all, I'm trying to parameterise some queries in SSIS. After some reading, it sounds like my best option is to create one variable that contains my base SQL, another that contains my criteria, and a final variable that is evaluated as an expression combining both of these. I want to end up with an SQL query that is effectively:

        UPDATE mytable SET something = 'bar' WHERE something_else = 'foo'

    So my first two variables have the scope of my package and are as follows:

        Name: BaseSQL
        Data Type: String
        Value: UPDATE mytable SET something = 'bar' WHERE something_else =

        Name: MyVariable
        Data Type: String
        Value: foo

    My third variable has the scope of the data flow task where I want to use this SQL and is as follows:

        Name: SQLQuery
        Data Type: String
        Value: @[User::BaseSQL] + "'" + @[User::MyVariable] + "'"
        EvaluateAsExpression: True

    In the OLE DB Source I then choose my connection and 'SQL command from variable', and select User::SQLQuery from the dropdown box. The Variable Value window then displays the following:

        @[User::BaseSQL] + "'" + @[User::MyVariable] + "'"

    This is as desired, and would provide the output I want from my DB. The variable name dropdown also contains User::BaseSQL and User::MyVariable, so I believe that my namespaces are correct. However, when I then click preview, I get the following error when configuring the OLE DB Source (using SQL command from variable):

        TITLE: Microsoft Visual Studio
        Error at Set runtime in DB [Set runtime in myDb DB [1]]: SSIS Error Code DTS_E_OLEDBERROR.
        An OLE DB error has occurred. Error code: 0x80040E14.
        An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0"
        Hresult: 0x80040E14 Description: "Statement(s) could not be prepared.".
        An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0"
        Hresult: 0x80040E14 Description: "Must declare the scalar variable "@".".
        (Microsoft Visual Studio)

    Can anyone advise what I'm missing or how I can resolve this, please? Thanks in advance!
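
    A hedged reading of the error: "Must declare the scalar variable "@"" suggests the OLE DB source received the literal expression text rather than its result, so the @[User::...] tokens went to SQL Server as-is; that matches the Variable Value window showing the expression instead of the evaluated string. Verifying that EvaluateAsExpression is True on the selected variable, and that the concatenation sits in the variable's Expression rather than its Value, is the first thing to check. An alternative sketch that sidesteps string-building entirely, using the OLE DB parameter marker:

        -- Choose "SQL command" instead of a variable, then map
        -- User::MyVariable to parameter 0 on the Parameters page.
        UPDATE mytable SET something = 'bar' WHERE something_else = ?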

    Read the article

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose that you want to check if an unsigned 32-bit integer variable MyAction is equal to any of the constants ACTION1, ACTION2, ... ACTIONn, where n is, say, 1000. I guess that, besides being more elegant,

        case MyAction of
          ACTION1: {code};
          ACTION2: {code};
          ...
          ACTIONn: {code};
        end;

    is much faster than

        if MyAction = ACTION1 then
          // code
        else if MyAction = ACTION2 then
          // code
        ...
        else if MyAction = ACTIONn then
          // code;

    I guess that the if variant takes time O(n) to complete (i.e. to find the right action) if the right action ACTIONi has a high value of i, whereas the case variant takes a lot less time (O(1)?). Am I correct that the case statement is much faster? Am I correct that the time required to find the right action in the case statement is independent of n? I.e. is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?

    Read the article

  • Performance of looping over an Unboxed array in Haskell

    - by Joey Adams
    First of all, it's great. However, I came across a situation where my benchmarks turned up weird results. I am new to Haskell, and this is the first time I've gotten my hands dirty with mutable arrays and Monads. The code below is based on this example. I wrote a generic monadic for function that takes numbers and a step function rather than a range (like forM_ does). I compared using my generic for function (Loop A) against embedding an equivalent recursive function (Loop B). Having Loop A is noticeably faster than having Loop B. Weirder, having both Loop A and B together is faster than having Loop B by itself (but slightly slower than Loop A by itself).

    Some possible explanations I can think of for the discrepancies. Note that these are just guesses:

    1. Something I haven't learned yet about how Haskell extracts results from monadic functions.
    2. Loop B faults the array in a less cache-efficient manner than Loop A. Why?
    3. I made a dumb mistake; Loop A and Loop B are actually different.

    Note that in all 3 cases of having either or both Loop A and Loop B, the program produces the same output. Here is the code. I tested it with ghc -O2 for.hs using GHC version 6.10.4.

        import Control.Monad
        import Control.Monad.ST
        import Data.Array.IArray
        import Data.Array.MArray
        import Data.Array.ST
        import Data.Array.Unboxed

        for :: (Num a, Ord a, Monad m) => a -> a -> (a -> a) -> (a -> m b) -> m ()
        for start end step f = loop start
          where
            loop i
              | i <= end = do
                  f i
                  loop (step i)
              | otherwise = return ()

        primesToNA :: Int -> UArray Int Bool
        primesToNA n = runSTUArray $ do
            a <- newArray (2,n) True :: ST s (STUArray s Int Bool)
            let sr = floor . (sqrt::Double->Double) . fromIntegral $ n+1

            -- Loop A
            for 4 n (+ 2) $ \j -> writeArray a j False

            -- Loop B
            let f i
                  | i <= n = do
                      writeArray a i False
                      f (i+2)
                  | otherwise = return ()
              in f 4

            forM_ [3,5..sr] $ \i -> do
                si <- readArray a i
                when si $ forM_ [i*i,i*i+i+i..n] $ \j ->
                    writeArray a j False

            return a

        primesTo :: Int -> [Int]
        primesTo n = [i | (i,p) <- assocs . primesToNA $ n, p]

        main = print $ primesTo 30000000
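
    For reference, a third spelling of the same loop using the stock combinator; [4,6..n] enumerates exactly the indices both loops touch:

        -- Loop C (sketch): the forM_ equivalent of Loops A and B.
        forM_ [4,6..n] $ \j -> writeArray a j False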

    Read the article

  • How can I automatically generate SQL update scripts when data is updated?

    - by Brann
    I'd like to automatically generate an update script each time a value is modified in my database. In other words, if a stored procedure, or a query, or whatever updates column a with value b in table c (which has a primary key on columns (i, j, ..., k)), I want to generate this:

        UPDATE c SET a = b WHERE i = ... AND j = ... AND k = ...

    and store it somewhere (for example as a raw string in a table). To complicate things, I want the script to be generated only if the update has been made by a specific user. The good news is that I've got a primary key defined for all my tables.

    I can see how to do this using a trigger, but I would need to generate specific triggers for each table, and to update them each and every time my schema changes. I guess there are some built-in ways to do this, as SQL Server sometimes needs to store this kind of thing (while using transactional replication, for example), but I couldn't find anything so far... any ideas? I'm also interested in ways to automatically generate the triggers (probably using triggers - meta-triggers, huh? - since I will need to update the triggers automatically when the schema changes).
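
    A per-table trigger sketch of the capture step, with everything beyond the question treated as an assumption: the audit table script_log, the column types, and filtering on SUSER_SNAME() for the "specific user" requirement.

        CREATE TRIGGER tr_c_capture ON c AFTER UPDATE AS
        BEGIN
            -- Only record changes made by the user of interest.
            IF SUSER_SNAME() <> N'some_user' RETURN;
            INSERT INTO script_log (script)
            SELECT N'UPDATE c SET a = ''' + CAST(ins.a AS nvarchar(max)) + N''''
                 + N' WHERE i = ' + CAST(ins.i AS nvarchar(20))
                 + N' AND j = '  + CAST(ins.j AS nvarchar(20))
                 + N' AND k = '  + CAST(ins.k AS nvarchar(20))
            FROM inserted ins;
        END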

    Read the article

  • JavaScript fine-grained performance tweaking

    - by thermal7
    I have been writing my first jQuery plugin and am struggling to find a way to time how long different pieces of code take to run. I can use Firebug and console.time/profile. However, it seems that because my code executes so fast, I get no results with profile, and time spits out 0ms. (http://stackoverflow.com/questions/2690697/firebug-profiling-issue-no-activity-to-profile/2690846#2690846) Is there a way to get the time at a greater level of detail than milliseconds in JavaScript?
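
    A common workaround sketch, since Date only resolves to whole milliseconds: run the code under test enough times for the total to be measurable, then divide (myPluginOperation is a placeholder for the code being timed).

        var iterations = 10000;
        var start = new Date().getTime();
        for (var i = 0; i < iterations; i++) {
            myPluginOperation();
        }
        var elapsed = new Date().getTime() - start;
        console.log((elapsed / iterations) + " ms per call");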

    Read the article

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with floats. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's furthermore frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would normally instinctively assume that this could be ensured by simply casting float to short, this fails miserably in Java, since at the bytecode level it results in F2I followed by I2S. So instead of a simple:

        int sample = (short) floatVal;

    I needed to resort to this ugly sequence:

        int sample = (int) floatVal;
        if (sample > 32767) {
            sample = 32767;
        } else if (sample < -32768) {
            sample = -32768;
        }

    Is there a faster way to do this? (About 6% of the total runtime seems to be spent on casting; while 6% might not seem much at first glance, it's astounding when I consider that the processing part involves a good chunk of matrix multiplications and an IDCT.)

    EDIT: The cast/clipping code above is (not surprisingly) in the body of a loop that reads float values from a float[] and puts them into a byte[]. I have a test suite that measures total runtime on several test cases (processing about 200MB of raw audio data). The 6% was concluded from the runtime difference when the cast assignment "int sample = (int) floatVal" was replaced by assigning the loop index to sample.

    EDIT @leopoldkot: I'm aware of the truncation in Java, as stated in the original question (F2I, I2S bytecode sequence). I only tried the cast to short because I assumed that Java had an F2S bytecode, which it unfortunately does not (coming originally from a 68K assembly background, where a simple "fmove.w FP0, D0" would have done exactly what I wanted).
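
    One hedged variant worth benchmarking: expressing the clamp through Math.min/Math.max, which HotSpot treats as intrinsics and can often compile without branches; the cast itself stays, but the clip becomes branch-free.

        int sample = Math.min(32767, Math.max(-32768, (int) floatVal));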

    Read the article

  • Application performance issue: SQL Server & Oracle

    - by Mahesh
    Hi, we have an application built on Silverlight, WCF, and NHibernate. Currently it supports the SQL Server and Oracle databases. Even with huge data it runs OK on SQL Server, but on Oracle it runs very slowly: one piece of functionality takes 5 seconds to execute on SQL Server and 30 seconds on Oracle. I am not able to figure out what the issue is. Two things I want to share with you about our database:

    1) Database: it contains one base table with a column of type [Text] on SQL Server and [NCLOB] on Oracle.

    2) Our database structure is very normalized.

    Maybe the NCLOB I have used on Oracle is the cause of the performance problem; I don't know the details about it. Can anyone please let me know what the cause might be, or which steps I should follow to bring the performance up to the SQL Server level? Thanks in advance. Mahesh.

    Read the article

  • Lighttpd server is stopped

    - by tomaszs
    I have a Lighttpd server with mod_fastcgi, and today I got an Internal Server Error (500). I checked my error log and it goes like this:

        2010-04-22 22:59:14: (server.c.1464) server stopped by UID = 0 PID = 3332
        2010-04-22 22:59:15: (mod_fastcgi.c.1768) connect failed: No such file or directory on unix:/tmp/php.socket-5
        2010-04-22 22:59:15: (mod_fastcgi.c.2956) backend died; we'll disable it for 5 seconds and send the request to another backend instead: reconnects: 0 load: 1
        2010-04-22 22:59:15: (mod_fastcgi.c.2709) child died somehow, waitpid failed: 10
        2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 0 PID = 3332
        2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385
        2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385
        2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385
        2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385

    What should I do to find out the cause of this?
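
    A few hedged first checks, following the paths in the log above: the "connect failed" line says the PHP FastCGI backend's socket vanished, so the question is whether the php-cgi processes are alive and respawning.

        # Is the backend socket there, and are the PHP processes running?
        ls -l /tmp/php.socket-5
        ps aux | grep php-cgi
        # Watch the error log live while reproducing the 500:
        tail -f /var/log/lighttpd/error.log   # adjust the path to your setup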

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost on any query against the table. For comparison, I have another table, which grows in size at about the same rate, but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements.

    Some specifics: both tables contain a DateTime field called SampleTime, and a statement like the following (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns, depending on which drive I set my database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds.

    Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL 2008 Express and the full-blown SQL Server 2008, and that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
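
    One hedged explanation that fits the numbers: without a suitable index the MAX() forces a scan, and rows carrying three XML columns are far wider to scan than all-scalar rows. A narrow index turns the query into a single seek (the index name is illustrative):

        CREATE INDEX ix_MyRecords_PlacementID_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);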

    Read the article

  • Cassandra performance slows down with counter columns

    - by tubcvt
    I have a 4-node cluster, where each node has 16 cores and 24 GB of RAM:

        192.168.23.114  datacenter1  rack1  Up  Normal  44.48 GB  25.00%
        192.168.23.115  datacenter1  rack1  Up  Normal  44.51 GB  25.00%
        192.168.23.116  datacenter1  rack1  Up  Normal  44.51 GB  25.00%
        192.168.23.117  datacenter1  rack1  Up  Normal  44.51 GB  25.00%

    We use about 10 column families (counter columns) to build some system statistics reports. The problem here is that when I set the replication_factor of this keyspace (containing the 10 counter column families) from 1 to 2, the CPU load on every node increases from about 10% (with replication factor 1) to 90%. :( Who can help me work around this? Why do counter columns consume so much CPU time? Thanks all.

    Read the article

  • What performance overhead do IoC containers involve?

    - by Sosh
    Hi, loose coupling is wonderful of course, but I have often wondered what overhead wiring up dynamically using an IoC container (for example Castle Windsor) has over a tightly coupled system. I know that a detailed answer would depend on what the IoC container was being used for, but I'm really just trying to get a feel for the magnitude of the overhead involved. Does anyone have any stats or other resources regarding this? Thanks
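
    In the absence of published stats, a micro-benchmark sketch (Windsor 2.x-era fluent API; treat the registration syntax and the transient lifestyle as assumptions, since the default singleton lifestyle would measure cached lookups instead):

        // Compare container resolution against plain construction.
        var container = new WindsorContainer();
        container.Register(
            Component.For<IFoo>().ImplementedBy<Foo>().LifeStyle.Transient);

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000; i++) { var x = container.Resolve<IFoo>(); }
        sw.Stop();
        Console.WriteLine("Resolve: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000; i++) { var x = new Foo(); }
        sw.Stop();
        Console.WriteLine("new():   {0} ms", sw.ElapsedMilliseconds);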

    Read the article

  • How significant are JPA lazy loading performance benefits?

    - by Robert
    I understand that this is highly specific to the concrete application, but I'm just wondering what's the general opinion, or at least some personal experiences on the issue. I have an aversion towards the 'open session in view' pattern, so to avoid it, I'm thinking about simply fetching everything small eagerly, and using queries in the service layer to fetch larger stuff. Has anyone used this and regretted it? And is there maybe some elegant solution to lazy loading in the view layer that I'm not aware of?
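
    A sketch of the "small things eager, big collections queried" split the question describes; the entity names are hypothetical, and the fetch-join query stands in for the service-layer fetch of the larger stuff:

        @Entity
        public class Order {
            @ManyToOne(fetch = FetchType.EAGER)    // small, always wanted
            private Customer customer;

            @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
            private List<LineItem> items;          // large, fetched on demand
        }

        // Service layer: load the big collection explicitly when a view needs it.
        Order o = em.createQuery(
                "SELECT DISTINCT o FROM Order o JOIN FETCH o.items WHERE o.id = :id",
                Order.class)
            .setParameter("id", orderId)
            .getSingleResult();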

    Read the article

  • Are stored procedures and triggers the same in SQL Server and MySQL?

    - by user314877
    Hello, I have two stored procedures and one trigger for a class I am taking that are done in SQL Server 2005. I want to recreate them in MySQL with phpMyAdmin. Every time I put them in, I get a 1064 error. Is this because phpMyAdmin does not handle stored procedures or triggers well, or is it because the syntax differs greatly between SQL Server 2005 and MySQL? If it is the latter, where could I go online to learn the correct syntax to write my stored procedures and triggers in MySQL/phpMyAdmin, or should I just pick up a book at Barnes & Noble? This is my senior project.
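
    The syntax does differ, and one frequent source of error 1064 when pasting T-SQL-style bodies is the statement delimiter: MySQL needs one other than ";" around a multi-statement body (in phpMyAdmin, set it in the "Delimiter" field under the SQL box). A minimal MySQL shape, with hypothetical names:

        DELIMITER //
        CREATE PROCEDURE my_proc(IN p_id INT)
        BEGIN
            SELECT * FROM my_table WHERE id = p_id;
        END //
        DELIMITER ;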

    Read the article

  • Performance Problem with Clojure Array

    - by dbyrne
    This piece of code is very slow. Execution from the slime-repl on my netbook takes a couple minutes. Am I doing something wrong?

        (def test-array (make-array Integer/TYPE 400 400 3))

        (doseq [x (range 400), y (range 400), z (range 3)]
          (aset test-array x y z 0))
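
    A hedged explanation: make-array with several dimensions builds nested Java arrays, and the variadic aset call has to go through reflection on every element. Hinting the intermediate arrays and setting through the innermost int[] keeps the hot path primitive; a sketch:

        (doseq [x (range 400), y (range 400)]
          (let [^ints row (aget ^objects (aget ^objects test-array x) y)]
            (dotimes [z 3]
              (aset row z 0))))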

    Read the article

  • Performance of C# method polymorphism with generics

    - by zildjohn01
    I noticed in C#, unlike C++, you can combine virtual and generic methods. For example:

        using System.Diagnostics;

        class Base {
            public virtual void Concrete() { Debug.WriteLine("base concrete"); }
            public virtual void Generic<T>() { Debug.WriteLine("base generic"); }
        }

        class Derived : Base {
            public override void Concrete() { Debug.WriteLine("derived concrete"); }
            public override void Generic<T>() { Debug.WriteLine("derived generic"); }
        }

        class App {
            static void Main() {
                Base x = new Derived();
                x.Concrete();
                x.Generic<PerformanceCounter>();
            }
        }

    Given that any number of versions of Generic<T> could be instantiated, it doesn't look like the standard vtbl approach could be used to resolve method calls, and in fact it's not. Here's the generated code:

        x.Concrete();
            mov ecx,dword ptr [ebp-8]
            mov eax,dword ptr [ecx]
            call dword ptr [eax+38h]

        x.Generic<PerformanceCounter>();
            push 989A38h
            mov ecx,dword ptr [ebp-8]
            mov edx,989914h
            call 76A874F1
            mov dword ptr [ebp-4],eax
            mov ecx,dword ptr [ebp-8]
            call dword ptr [ebp-4]

    The extra code appears to be looking up a dynamic vtbl according to the generic parameters, and then calling into it. Has anyone written about the specifics of this implementation? How well does it perform compared to the non-generic case?

    Read the article
