How do casts actually work at the CLR level?

Posted by devoured elysium on Stack Overflow

When doing an upcast or downcast, what really happens behind the scenes? I had the idea that when doing something like:

string myString = "abc";
object myObject = myString;              // implicit upcast
string myStringBack = (string)myObject;  // explicit downcast

the cast in the last line would serve only to assure the compiler that we know we aren't doing anything wrong, so no actual casting code would be emitted. It seems I was wrong:

.maxstack 1
.locals init (
    [0] string myString,
    [1] object myObject,
    [2] string myStringBack)
L_0000: nop 
L_0001: ldstr "abc"
L_0006: stloc.0 
L_0007: ldloc.0 
L_0008: stloc.1 
L_0009: ldloc.1 
L_000a: castclass string  // runtime check: throws InvalidCastException if the object is not a string
L_000f: stloc.2 
L_0010: ret 

Why does the CLR need something like castclass string?
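To make the question concrete: the very same cast can succeed or fail depending only on the runtime type of the object, which (I assume) is exactly what castclass is checking. A minimal sketch, with names of my own invention:

using System;

class WhyCastclassDemo
{
    static string Describe(object o)
    {
        // The compiler cannot know what 'o' refers to here, so the
        // downcast has to be verified at runtime (by castclass).
        return (string)o;
    }

    static void Main()
    {
        Console.WriteLine(Describe("abc")); // fine: o really refers to a string
        Console.WriteLine(Describe(42));    // throws InvalidCastException: o is a boxed int
    }
}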

There are two possible implementations for a downcast:

  1. You require a castclass something. When execution reaches the castclass instruction, the CLR attempts the cast. But then, what would happen had I omitted the castclass string line and tried to run the code? (There's a sketch after this list contrasting castclass with its non-throwing sibling, isinst.)
  2. You don't require a castclass. Since all reference types share the same internal structure, if you tried to use a Form instance as a string, the runtime would throw a wrong-usage exception at that point (because it detects that a Form is not a string or any of its subtypes).
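Here is the sketch mentioned in option 1. As far as I can tell, C# exposes both behaviors: the hard cast compiles to castclass (which throws on mismatch), while the as operator compiles to isinst (which yields null instead):

using System;

class CastVsAsDemo
{
    static void Main()
    {
        object myObject = 42; // a boxed int, definitely not a string

        // 'as' compiles to isinst: no exception, just null on mismatch.
        string viaAs = myObject as string;
        Console.WriteLine(viaAs == null); // True

        // The hard cast compiles to castclass: throws on mismatch.
        try
        {
            string viaCast = (string)myObject;
            Console.WriteLine(viaCast);
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("castclass rejected the cast");
        }
    }
}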

Also, is the following statement from C# 4.0 in a Nutshell correct?

Upcasting and downcasting between compatible reference types performs reference
conversions: a new reference is created that points to the same object.

Does it really create a new reference? I thought it would be the same reference, only stored in a differently typed variable.
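A quick experiment of mine (so take it as a sketch, not an authoritative answer) seems to support that reading: ReferenceEquals reports that all three variables refer to the same object, so "a new reference" presumably just means a copy of the same reference value into another variable:

using System;

class SameReferenceDemo
{
    static void Main()
    {
        string myString = "abc";
        object myObject = myString;
        string myStringBack = (string)myObject;

        // All three variables hold the same reference value;
        // the cast allocates no new object.
        Console.WriteLine(ReferenceEquals(myString, myObject));     // True
        Console.WriteLine(ReferenceEquals(myString, myStringBack)); // True
    }
}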

Thanks
