Covariance and Contravariance type inference in C# 4.0

When we define interfaces in C# 4.0, we are allowed to mark each generic parameter as in or out. If we try to mark a parameter as out where that would lead to a problem, the compiler raises an error and refuses to let us do it.
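
For example (a minimal sketch with a hypothetical IProducer<T>; the names are mine, not from any library), marking T as out lets a producer of a derived type stand in wherever a producer of a base type is expected:

using System;

// Hypothetical IProducer<T>: 'out' marks T as covariant, so T may only
// appear in output positions (return types).
interface IProducer<out T> {
    T Produce();
}

class StringProducer : IProducer<string> {
    public string Produce() { return "hello"; }
}

class CovarianceDemo {
    static void Main() {
        IProducer<string> strings = new StringProducer();
        // Legal only because T is declared 'out': every string the
        // producer returns is also an object.
        IProducer<object> objects = strings;
        Console.WriteLine(objects.Produce());
    }
}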

Question:

If the compiler has a way of inferring which uses are valid for both covariance (out) and contravariance (in), why do we have to mark the interfaces at all? Wouldn't it be enough to let us define interfaces as we always did, and raise an error only when client code tries to use them in an unsafe way?

Example:

interface MyInterface<out T> {
    T abracadabra();
}
//works OK

interface MyInterface2<in T> {
    T abracadabra();
}
//compiler raises an error.
//This makes me think that the compiler is capable
//of understanding which situations might cause
//run-time problems, and that it prohibits them.
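
For contrast, here is a sketch of the contravariant case the compiler does accept, where T appears only in input positions (again with a hypothetical IConsumer<T> of my own naming):

using System;

// Hypothetical IConsumer<T>: 'in' is sound here because T appears
// only in input positions (parameters), never as a return type.
interface IConsumer<in T> {
    void Consume(T item);
}

class ObjectConsumer : IConsumer<object> {
    public void Consume(object item) { Console.WriteLine(item); }
}

class ContravarianceDemo {
    static void Main() {
        IConsumer<object> objects = new ObjectConsumer();
        // Legal only because T is declared 'in': a consumer that accepts
        // any object can certainly accept any string.
        IConsumer<string> strings = objects;
        strings.Consume("abracadabra");
    }
}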

Also,

isn't this what Java does in the same situation? From what I recall, in Java you just write something like

IMyInterface<? extends Whatever> myInterface;  //covariance
IMyInterface<? super Whatever> myInterface2;   //contravariance

Or am I mixing things up?

Thanks
