Search Results

Search found 4993 results on 200 pages for 'conversion operator'.

Page 79/200 | < Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >

  • Calling different functions depending on the template parameter (C++)

    - by Noman Javed
    I want to have something like this: class A { public: Array& operator()() { . . . } }; class B { public: Element& operator[](int i) { ... } }; template<class T> class execute { public: output_type = operator()(T& t) { if(T == A) Array out = T()(); else { Array res; for(int i=0 ; i < length; ++i) a[i] = t[i]; } } }; There are two issues here: 1. A meta-function replacing the if-else in execute's operator(). 2. The return type of execute's operator(). Thanks in anticipation, Noman
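
    A minimal sketch of one way to express this, using hypothetical stand-ins for the asker's A, B and Array types: branch at compile time with if constexpr (C++17), or with tag dispatch/specialization in older C++, so each branch is only instantiated for the type that supports it and the return type stays fixed.

        #include <type_traits>
        #include <vector>

        struct A { std::vector<int> operator()() const { return {1, 2, 3}; } };   // Array-like result
        struct B { int operator[](int i) const { return i * 10; } };              // element access

        template <class T>
        std::vector<int> execute(T& t, int length) {
            if constexpr (std::is_same_v<T, A>) {
                return t();                      // A: call operator()
            } else {
                std::vector<int> out(length);
                for (int i = 0; i < length; ++i)
                    out[i] = t[i];               // B: copy element by element
                return out;
            }
        }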

    Read the article

  • Why does the right-shift operator produce a zero instead of a one?

    - by mrt181
    Hi, I am teaching myself Java and I am working through the exercises in Thinking in Java. On page 116, exercise 11, you should right-shift an integer through all its binary positions and display each position with Integer.toBinaryString. public static void main(String[] args) { int i = 8; System.out.println(Integer.toBinaryString(i)); int maxIterations = Integer.toBinaryString(i).length(); int j; for (j = 1; j < maxIterations; j++) { i >>= 1; System.out.println(Integer.toBinaryString(i)); } In the solution guide the output looks like this: 1000 1100 1110 1111 When I run this code I get this: 1000 100 10 1 What is going on here? Are the digits cut off? I am using jdk1.6.0_20 64bit. The book uses jdk1.5 32bit.
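
    A short illustration of what is likely going on (my own sketch, not from the book): >> is an arithmetic shift, so it only shifts in 1s when the sign bit is already 1, i.e. for negative values; for a positive 8 it shifts in 0s, and Integer.toBinaryString drops leading zeros, which is why 8 >> 1 prints as 100 rather than 0100. The 1000, 1100, 1110, 1111 progression is the sign-extension pattern, so the solution guide was presumably shifting a value with the high bit set.

        public class ShiftDemo {
            public static void main(String[] args) {
                int positive = 8;
                int negative = -8;

                // Arithmetic shift (>>): the sign bit is copied in from the left.
                System.out.println(Integer.toBinaryString(positive >> 1));  // 100 (leading zeros are not printed)
                System.out.println(Integer.toBinaryString(negative >> 1));  // thirty 1s followed by 00

                // Logical shift (>>>): zeros are shifted in regardless of sign.
                System.out.println(Integer.toBinaryString(negative >>> 1)); // 29 ones followed by 00 (sign bit gone)
            }
        }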

    Read the article

  • insert ... select with divide operator in select errors?

    - by Mark
    Hi, the following query CREATE TABLE IF NOT EXISTS XY ( x INT NOT NULL , y FLOAT NULL , PRIMARY KEY(x) ) INSERT INTO XY (x,y) (select 1 as x ,(1/7) as y); errors with Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO XY (x,y) (select 1 as x ,(1/7) as y)' at line 7 Line 1, column 1 any ideas?
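
    The 1064 error points at the whole INSERT, which usually means the server is still parsing the CREATE TABLE; when the two statements are sent together they need a statement separator (or need to be executed one at a time). A sketch of the likely fix; the parentheses around the SELECT are also unnecessary:

        CREATE TABLE IF NOT EXISTS XY (
          x INT NOT NULL,
          y FLOAT NULL,
          PRIMARY KEY (x)
        );  -- terminate this statement before starting the INSERT

        INSERT INTO XY (x, y)
        SELECT 1 AS x, (1 / 7) AS y;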

    Read the article

  • Document conversion and viewing, what are the cutting edge solutions?

    - by DigitalLawyer
    Goal: building a web application where a user can: Upload a document (doc, docx, pdf, additional office formats a +) View that document in a browser, preferably in html Download the document (in doc, pdf, additional open formats a +) Current solution: Ruby on Rails Application on Rackspace Users can upload doc and pdf files (AWS) Files can be downloaded in the format in which they were uploaded Thumbnail generation ([doc, pdf] - pdf - png) is done through AbiWord. Certain doc files do not convert well. Documents can be viewed in embedded Google docs viewer (https://docs.google.com/viewer). Certain doc files cannot be displayed. Little flexibility. Potential improvements: Document viewing in pdf through pdf.js Viewing in html (+ annotation) through Crocodoc I'd be glad to hear other users' experiences, and will add good recommendations to this list.

    Read the article

  • Is there a .NET class that represents operator types?

    - by user323774
    I would like to do the following: *OperatorType* o = *OperatorType*.GreaterThan; int i = 50; int increment = -1; int l = 0; for(i; i o l; i = i + increment) { //code } This concept can be kludged in JavaScript using an eval()... but the idea is to have a loop that can go forward or backward based on values set at runtime. Is this possible? Thanks
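
    There is no operator "type" you can drop into an expression like that, but a delegate gets the same effect and lets the comparison be chosen at runtime. A minimal sketch (the names keepGoing, start and end are mine); System.Linq.Expressions.ExpressionType does contain members such as GreaterThan, but building and compiling expression trees is heavier than a plain delegate for this.

        using System;

        class LoopDirectionDemo
        {
            static void Main()
            {
                int start = 50, end = 0, increment = -1;

                // Choose the loop condition at runtime.
                Func<int, int, bool> keepGoing;
                if (increment > 0)
                    keepGoing = (i, limit) => i < limit;
                else
                    keepGoing = (i, limit) => i > limit;

                for (int i = start; keepGoing(i, end); i += increment)
                {
                    // work with i
                }
            }
        }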

    Read the article

  • How to skip an empty LIKE operator in a multiple LIKE query?

    - by alex
    I notice my query doesn't behave correctly if one of the LIKE variables is empty: SELECT name FROM employee WHERE name LIKE '%a%' AND color LIKE '%A%' AND city LIKE '%b%' AND country LIKE '%B%' AND sport LIKE '%c%' AND hobby LIKE '%C%' Now, when a and A are not empty it works, but when a, A and c are not empty the c part does not seem to be executed. How can I fix this?
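
    Note that LIKE '%%' matches every non-NULL value, so an empty search term does not really skip that condition; it over-matches, and any NULL column silently drops the row. One common fix, sketched here assuming the search terms arrive as variables (written as @name, @sport, etc., which are placeholders of mine), is to make each filter explicitly optional:

        SELECT name
        FROM employee
        WHERE (@name    = '' OR name    LIKE CONCAT('%', @name,    '%'))
          AND (@color   = '' OR color   LIKE CONCAT('%', @color,   '%'))
          AND (@city    = '' OR city    LIKE CONCAT('%', @city,    '%'))
          AND (@country = '' OR country LIKE CONCAT('%', @country, '%'))
          AND (@sport   = '' OR sport   LIKE CONCAT('%', @sport,   '%'))
          AND (@hobby   = '' OR hobby   LIKE CONCAT('%', @hobby,   '%'));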

    Read the article

  • Why does the assignment operator return a value and not a reference?

    - by Nick Lowman
    I saw the example below explained on this site and thought both answers would be 20 and not the 10 that is returned. He wrote that both the comma and assignment returns a value, not a reference. I don't quite understand what that means. I understand it in relation to passing variables into functions or methods i.e primitive types are passed in by value and objects by reference but I'm not sure how it applies in this case. I also understand about context and the value of 'this' (after help from stackoverflow) but I thought in both cases I would still be invoking it as a method, foo.bar() which would mean foo is the context but it seems both result in a function call bar(). Why is that and what does it all mean? var x = 10; var foo = { x: 20, bar: function () {return this.x;} }; (foo.bar = foo.bar)();//returns 10 (foo.bar, foo.bar)();//returns 10
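
    A way to see it (my own sketch of the usual explanation, assuming a browser script in non-strict mode): both the assignment and the comma operator produce the bare function value, not the foo.bar reference, so the subsequent call has no base object; this then falls back to the global object inside the function, where x is 10.

        var x = 10;
        var foo = {
          x: 20,
          bar: function () { return this.x; }
        };

        console.log(foo.bar());              // 20 - called as a method, this === foo

        var f = (foo.bar = foo.bar);         // the assignment yields the function value...
        console.log(f === foo.bar);          // true - same function object
        console.log(f());                    // 10 - ...but the call has lost its foo base

        console.log((foo.bar, foo.bar)());   // 10 - the comma operator does the same thing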

    Read the article

  • Why is there no implicit conversion from pointer to reference-to-const-pointer?

    - by user316606
    I'll illustrate my question with code: #include <iostream> void PrintInt(const unsigned char*& ptr) { int data = 0; ::memcpy(&data, ptr, sizeof(data)); // advance the pointer reference. ptr += sizeof(data); std::cout << std::hex << data << " " << std::endl; } int main(int, char**) { unsigned char buffer[] = { 0x11, 0x11, 0x11, 0x11, 0x22, 0x22, 0x22, 0x22, }; /* const */ unsigned char* ptr = buffer; PrintInt(ptr); // error C2664: ... PrintInt(ptr); // error C2664: ... return 0; } When I compile this code (in VS2008) I get this: error C2664: 'PrintInt' : cannot convert parameter 1 from 'unsigned char *' to 'const unsigned char *&'. If I uncomment the "const" comment it works fine. However, shouldn't the pointer implicitly convert to a pointer-to-const, and then the reference be taken? Am I wrong in expecting this to work? Thanks!
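
    A sketch of why the rule exists and of the usual workaround (my own example, reusing the asker's PrintInt): if unsigned char*& could bind to const unsigned char*&, a const object's address could be stored into a non-const pointer and then written through, so the qualification conversion applies to the pointer value but not to a non-const reference to the pointer.

        #include <cstring>
        #include <iostream>

        void PrintInt(const unsigned char*& ptr) {
            int data = 0;
            std::memcpy(&data, ptr, sizeof(data));
            ptr += sizeof(data);                     // advance the caller's pointer
            std::cout << std::hex << data << '\n';
        }

        int main() {
            unsigned char buffer[] = { 0x11, 0x11, 0x11, 0x11, 0x22, 0x22, 0x22, 0x22 };

            // View the (non-const) buffer through a pointer-to-const from the start.
            const unsigned char* ptr = buffer;
            PrintInt(ptr);
            PrintInt(ptr);

            // If the binding were allowed, this would compile and break const:
            //   const unsigned char c = 0xFF;
            //   unsigned char* raw = 0;
            //   const unsigned char*& alias = raw;  // rejected for exactly this reason
            //   alias = &c;
            //   *raw = 0;                           // would modify the const object c
        }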

    Read the article

  • Is the + in += on a Map a prefix operator of =?

    - by Steve
    In the book "Programming in Scala" from Martin Odersky there is a simple example in the first chapter: var capital = Map("US" -> "Washington", "France" -> "Paris") capital += ("Japan" -> "Tokyo") The second line can also be written as capital = capital + ("Japan" -> "Tokyo") I am curious about the += notation. In the class Map, I didn't found a += method. I was able to the same behaviour in an own example like class Foo() { def +(value:String) = { println(value) this } } object Main { def main(args: Array[String]) = { var foo = new Foo() foo = foo + "bar" foo += "bar" } } I am questioning myself, why the += notation is possible. It doesn't work if the method in the class Foo is called test for example. This lead me to the prefix notation. Is the + a prefix notation for the assignment sign (=)? Can somebody explain this behaviour?

    Read the article

  • Javascript === vs == : Does it matter which "equal" operator I use?

    - by bcasp
    I'm using JSLint to go through some horrific JavaScript at work and it's returning a huge number of suggestions to replace == with === when doing things like comparing 'idSele_UNVEHtype.value.length == 0' inside of an if statement. I'm basically wondering if there is a performance benefit to replacing == with ===. Any performance improvement would probably be welcomed as there are hundreds (if not thousands) of these comparison operators being used throughout the file. I tried searching for relevant information to this question, but trying to search for something like '=== vs ==' doesn't seem to work so well with search engines...
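
    For what it is worth, the difference is about coercion rather than speed: == may convert its operands before comparing, === never does, and when both operands already have the same type (as with .length, which is a number) the two operators do the same work, so any performance change from this edit should be negligible. A few illustrative cases:

        console.log(0 == '');            // true  ('' is coerced to the number 0)
        console.log(0 === '');           // false (different types)
        console.log('0' == false);       // true  (both are coerced to 0)
        console.log('0' === false);      // false
        console.log(null == undefined);  // true
        console.log(null === undefined); // false

        // Same type on both sides: identical result and essentially identical cost.
        console.log('abc'.length == 0);  // false
        console.log('abc'.length === 0); // false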

    Read the article

  • Why would you use the ternary operator without assigning a value for the "true" condition?

    - by RickNotFred
    In the Android open-source qemu code I ran across this line of code: machine->max_cpus = machine->max_cpus ?: 1; /* Default to UP */ Is this just a confusing way of saying: if (machine->max_cpus) { ; //do nothing } else { machine->max_cpus = 1; } If so, wouldn't it be clearer as: if (machine->max_cpus == 0) machine->max_cpus = 1; Interestingly, this compiles and works fine with gcc, but doesn't compile on http://www.comeaucomputing.com/tryitout/ .
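
    The two-operand a ?: b form is a GNU C extension (also accepted by clang), not ISO C, which is why gcc takes it and Comeau's strict front end does not. It means a ? a : b except that a is evaluated only once, which matters when a has side effects. A small sketch under those assumptions:

        #include <stdio.h>

        static int next_id(void) {          /* has a side effect */
            static int id = 0;
            return ++id;
        }

        int main(void) {
            int max_cpus = 0;
            max_cpus = max_cpus ?: 1;       /* 0 is false, so default to 1 */
            printf("max_cpus = %d\n", max_cpus);

            int x = next_id() ?: 42;        /* next_id() is called exactly once */
            printf("x = %d\n", x);          /* prints 1 */
            return 0;
        }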

    Read the article

  • C# performance of static string[] contains() (slooooow) vs. == operator

    - by Andrew White
    Hiya, just a quick query: I had a piece of code which compared a string against a long list of values, e.g. if(str == "string1" || str == "string2" || str == "string3" || str == "string4") DoSomething(); and, in the interest of code clarity and maintainability, I changed it to public static string[] strValues = { "String1", "String2", "String3", "String4"}; ... if(strValues.Contains(str)) DoSomething(); Only to find the code execution time went from 2.5secs to 6.8secs (executed ca. 200,000 times). I certainly understand a slight performance trade-off, but 300%? Is there any way I could define the static strings differently to enhance performance? Cheers.
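
    The slowdown is most likely because Enumerable.Contains walks the array and goes through the general string-equality machinery for every element on every call, while the hand-written == chain is a few direct ordinal comparisons. A HashSet<string> keeps the readability and usually restores the speed; a sketch (class and member names are mine):

        using System;
        using System.Collections.Generic;

        static class Matcher
        {
            // Built once; lookups are O(1) instead of a linear scan.
            static readonly HashSet<string> Values =
                new HashSet<string>(StringComparer.Ordinal)
                {
                    "String1", "String2", "String3", "String4"
                };

            public static bool IsMatch(string str)
            {
                return Values.Contains(str);
            }
        }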

    Read the article

  • Is it bad practice to make an iterator that is aware of its own end?

    - by aaronman
    For some background of why I am asking this question here is an example. In python the method chain chains an arbitrary number of ranges together and makes them into one without making copies. Here is a link in case you don't understand it. I decided I would implement chain in c++ using variadic templates. As far as I can tell the only way to make an iterator for chain that will successfully go to the next container is for each iterator to to know about the end of the container (I thought of a sort of hack in where when != is called against the end it will know to go to the next container, but the first way seemed easier and safer and more versatile). My question is if there is anything inherently wrong with an iterator knowing about its own end, my code is in c++ but this can be language agnostic since many languages have iterators. #ifndef CHAIN_HPP #define CHAIN_HPP #include "iterator_range.hpp" namespace iter { template <typename ... Containers> struct chain_iter; template <typename Container> struct chain_iter<Container> { private: using Iterator = decltype(((Container*)nullptr)->begin()); Iterator begin; const Iterator end;//never really used but kept it for consistency public: chain_iter(Container & container, bool is_end=false) : begin(container.begin()),end(container.end()) { if(is_end) begin = container.end(); } chain_iter & operator++() { ++begin; return *this; } auto operator*()->decltype(*begin) { return *begin; } bool operator!=(const chain_iter & rhs) const{ return this->begin != rhs.begin; } }; template <typename Container, typename ... Containers> struct chain_iter<Container,Containers...> { private: using Iterator = decltype(((Container*)nullptr)->begin()); Iterator begin; const Iterator end; bool end_reached = false; chain_iter<Containers...> next_iter; public: chain_iter(Container & container, Containers& ... rest, bool is_end=false) : begin(container.begin()), end(container.end()), next_iter(rest...,is_end) { if(is_end) begin = container.end(); } chain_iter & operator++() { if (begin == end) { ++next_iter; } else { ++begin; } return *this; } auto operator*()->decltype(*begin) { if (begin == end) { return *next_iter; } else { return *begin; } } bool operator !=(const chain_iter & rhs) const { if (begin == end) { return this->next_iter != rhs.next_iter; } else return this->begin != rhs.begin; } }; template <typename ... Containers> iterator_range<chain_iter<Containers...>> chain(Containers& ... containers) { auto begin = chain_iter<Containers...>(containers...); auto end = chain_iter<Containers...>(containers...,true); return iterator_range<chain_iter<Containers...>>(begin,end); } } #endif //CHAIN_HPP
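
    Plenty of real iterators carry their end (or are "sentinel-aware"); the main cost is a fatter iterator and an operator!= that is only meaningful against the matching end iterator. For reference, here is roughly how the result is meant to be used, assuming the asker's chain.hpp (assumed filename) and iterator_range headers:

        #include <iostream>
        #include <list>
        #include <vector>
        #include "chain.hpp"   // the asker's header shown above (assumed filename)

        int main() {
            std::vector<int> a{1, 2, 3};
            std::list<int>   b{4, 5};

            for (auto x : iter::chain(a, b))   // walks a, then seamlessly b
                std::cout << x << ' ';         // prints: 1 2 3 4 5
        }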

    Read the article

  • New features of C# 4.0

    This article covers New features of C# 4.0. Article has been divided into below sections. Introduction. Dynamic Lookup. Named and Optional Arguments. Features for COM interop. Variance. Relationship with Visual Basic. Resources. Other interested readings… 22 New Features of Visual Studio 2008 for .NET Professionals 50 New Features of SQL Server 2008 IIS 7.0 New features Introduction It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out. Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios. C# 4.0 The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include a. objects from dynamic programming languages, such as Python or Ruby b. COM objects accessed through IDispatch c. ordinary .NET types accessed through reflection d. objects with changing structure, such as HTML DOM objects While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups: Dynamic lookup Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead gets resolved at runtime. Named and optional parameters Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position. 
COM specific interop features Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience. Variance It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type safe “co-and contravariance” and common BCL types are updated to take advantage of that. Dynamic Lookup Dynamic lookup allows you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: Static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call. The dynamic type C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime: dynamic d = GetDynamicObject(…); d.M(7); The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs: dynamic d = 7; // implicit conversion int i = d; // assignment conversion Dynamic operations Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically: dynamic d = GetDynamicObject(…); d.M(7); // calling methods d.f = d.P; // getting and setting fields and properties d[“one”] = d[“two”]; // getting and setting through indexers int i = d + 3; // calling operators string s = d(5,7); // invoking as a delegate The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime. The result of any dynamic operation is itself of type dynamic. Runtime lookup At runtime a dynamic operation is dispatched according to the nature of its target object d: COM objects If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties. 
Dynamic objects If d implements the interface IDynamicObject d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object’s properties using property syntax. Plain objects Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# “runtime binder” which implements C#’s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to “finish the work” on dynamic operations that was deferred by the static compiler. Example Assume the following code: dynamic d1 = new Foo(); dynamic d2 = new Bar(); string s; d1.M(s, d2, 3, null); Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the “payload”) is essentially equivalent to: “Perform an instance method call of M with the following arguments: 1. a string 2. a dynamic 3. a literal int 3 4. a literal object null” At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up to finish the overload resolution job based on runtime type information, proceeding as follows: 1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2. 2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics. 3. If the method is found it is invoked; otherwise a runtime exception is thrown. Overload resolution with dynamic arguments Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic: Foo foo = new Foo(); dynamic d = new Bar(); var result = foo.M(d); The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic. The Dynamic Language Runtime An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR. 
For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain. Open issues There are a few limitations and things that might work differently than you would expect. · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn’t have syntax to support this. · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload. · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. “understand”) an anonymous function without knowing what type it is converted to. One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects: dynamic collection = …; var result = collection.Select(e => e + 5); If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0. Named and Optional Arguments Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments is a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET however you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations. Optional parameters A parameter is declared optional simply by providing a default value for it: public void M(int x, int y = 5, int z = 7); Here y and z are optional parameters and can be omitted in calls: M(1, 2, 3); // ordinary call of M M(1, 2); // omitting z – equivalent to M(1, 2, 7) M(1); // omitting both y and z – equivalent to M(1, 5, 7) Named and optional arguments C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write: M(1, z: 3); // passing z by name or M(x: 1, z: 3); // passing both x and z by name or even M(z: 3, x: 1); // reversing the order of arguments All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors. 
Overload resolution Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred. M(string s, int i = 1); M(object o); M(int i, string s = “Hello”); M(int i); M(5); Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn’t convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int). Features for COM interop Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0. Dynamic import Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say excel.Cells[1, 1].Value = "Hello"; instead of ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello"; and Excel.Range range = excel.Cells[1, 1]; instead of Excel.Range range = (Excel.Range)excel.Cells[1, 1]; Compiling without PIAs Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types where really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded. Omitting ref Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters. 
It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference. Open issues A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0. Variance An aspect of generics that often comes across as surprising is that the following is illegal: IList<string> strings = new List<string>(); IList<object> objects = strings; The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write: objects[0] = 5; string s = strings[0]; Allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say: IEnumerable<object> objects = strings; There is no way we can put the wrong kind of thing into strings through objects, because objects doesn’t have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work. Covariance In .NET 4.0 the IEnumerable<T> interface will be declared in the following way: public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IEnumerator { bool MoveNext(); T Current { get; } } The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above: var result = strings.Union(objects); // succeeds with an IEnumerable<object> This would previously have been disallowed, and you would have had to to some cumbersome wrapping to get the two sequences to have the same element type. Contravariance Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>: public interface IComparer<in T> { public int Compare(T left, T right); } The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: If a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance. 
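As a concrete illustration of contravariance (a sketch of my own, not from the whitepaper): with the .NET 4.0 declaration of IComparer<in T> shown above, a comparer written once against object can be handed to an API that expects an IComparer<string>.

        using System;
        using System.Collections.Generic;

        class ByTextLength : IComparer<object>
        {
            public int Compare(object left, object right)
            {
                return left.ToString().Length.CompareTo(right.ToString().Length);
            }
        }

        class Demo
        {
            static void Main()
            {
                IComparer<string> byLength = new ByTextLength();   // contravariant conversion
                var words = new List<string> { "pear", "fig", "banana" };
                words.Sort(byLength);
                Console.WriteLine(string.Join(", ", words));       // fig, pear, banana
            }
        }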
A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types: public delegate TResult Func<in TArg, out TResult>(TArg arg); Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>. Limitations Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types. COM Example Here is a larger Office automation example that shows many of the new C# features in action. using System; using System.Diagnostics; using System.Linq; using Excel = Microsoft.Office.Interop.Excel; using Word = Microsoft.Office.Interop.Word; class Program { static void Main(string[] args) { var excel = new Excel.Application(); excel.Visible = true; excel.Workbooks.Add(); // optional arguments omitted excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically excel.Cells[1, 2].Value = "Memory Usage"; // accessed var processes = Process.GetProcesses() .OrderByDescending(p => p.WorkingSet) .Take(10); int i = 2; foreach (var p in processes) { excel.Cells[i, 1].Value = p.ProcessName; // no casts excel.Cells[i, 2].Value = p.WorkingSet; // no casts i++; } Excel.Range range = excel.Cells[1, 1]; // no casts Excel.Chart chart = excel.ActiveWorkbook.Charts. Add(After: excel.ActiveSheet); // named and optional arguments chart.ChartWizard( Source: range.CurrentRegion, Title: "Memory Usage in " + Environment.MachineName); //named+optional chart.ChartStyle = 45; chart.CopyPicture(Excel.XlPictureAppearance.xlScreen, Excel.XlCopyPictureFormat.xlBitmap, Excel.XlPictureAppearance.xlScreen); var word = new Word.Application(); word.Visible = true; word.Documents.Add(); // optional arguments word.Selection.Paste(); } } The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges. Relationship with Visual Basic A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic: · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#. · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind. · NoPIA and variance are both being introduced to VB and C# at the same time. VB in turn is adding a number of features that have hitherto been a mainstay of C#. 
As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone. Resources All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Objective-C scanf spaces issue

    - by Rob
    I am learning Objective-C and for the life of me can't figure out why this is happening. When the input is read with: scanf("%c %lf", &operator, &number); for some reason it messes with this code: doQuit = 0; [deskCalc setAccumulator: 0]; while (doQuit == 0) { NSLog(@"Please input an operation and then a number:"); scanf("%c %lf", &operator, &number); switch (operator) { case '+': [deskCalc add: number]; NSLog (@"%lf", [deskCalc accumulator]); break; case '-': [deskCalc subtract: number]; NSLog (@"%lf", [deskCalc accumulator]); break; case '*': case 'x': [deskCalc multiply: number]; NSLog (@"%lf", [deskCalc accumulator]); break; case '/': if (number == 0) NSLog(@"You can't divide by zero."); else [deskCalc divide: number]; NSLog (@"%lf", [deskCalc accumulator]); break; case 'S': [deskCalc setAccumulator: number]; NSLog (@"%lf", [deskCalc accumulator]); break; case 'E': doQuit = 1; break; default: NSLog(@"You did not enter a valid operator."); break; } } When the user inputs, for example, "E 10", it will exit the loop but it will also print "You did not enter a valid operator." When I change the code to: scanf(" %c %lf", &operator, &number); it all of a sudden doesn't print this last line. What is it about the space before %c that fixes this?
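
    The leading space matters because %c never skips whitespace: after the previous entry, the newline is still sitting in stdin, so the next %c reads '\n' as the "operator", the default case fires, and only the following iteration sees the real E. Whitespace in the format string tells scanf to skip whitespace, including that leftover newline, before reading the character. A stripped-down sketch of the same loop:

        #include <stdio.h>

        int main(void) {
            char operator;
            double number;

            /* The leading space before %c skips the newline left over
               from the previous line of input. */
            while (scanf(" %c %lf", &operator, &number) == 2) {
                printf("op=%c value=%f\n", operator, number);
                if (operator == 'E') break;
            }
            return 0;
        }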

    Read the article

  • Why can't we overload "=" using a friend function?

    - by ashish-sangwan
    Why is it not allowed to overload "=" using a friend function? I have written a small program but it is giving an error. class comp { int real; int imaginary; public: comp(){real=0; imaginary=0;} void show(){cout << "Real="<<real<<" Imaginary="<<imaginary<<endl;} void set(int i,int j){real=i;imaginary=j;} friend comp operator=(comp &op1,const comp &op2); }; comp operator=(comp &op1,const comp &op2) { op1.imaginary=op2.imaginary; op1.real=op2.real; return op1; } int main() { comp a,b; a.set(10,20); b=a; b.show(); return 0; } The compilation gives the following error: [root@dogmatix stackoverflow]# g++ prog4.cpp prog4.cpp:11: error: ‘comp operator=(comp&, const comp&)’ must be a nonstatic member function prog4.cpp:14: error: ‘comp operator=(comp&, const comp&)’ must be a nonstatic member function prog4.cpp: In function ‘int main()’: prog4.cpp:25: error: ambiguous overload for ‘operator=’ in ‘b = a’ prog4.cpp:4: note: candidates are: comp& comp::operator=(const comp&) prog4.cpp:14: note: comp operator=(comp&, const comp&)
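
    The language simply requires it: an overloaded operator= must be a non-static member function with exactly one parameter, so the left operand is always *this and a free (friend) version is never considered, which is what the "must be a nonstatic member function" errors above are saying. A member version of the same assignment, as a sketch:

        #include <iostream>

        class comp {
            int real;
            int imaginary;
        public:
            comp() : real(0), imaginary(0) {}
            void set(int i, int j) { real = i; imaginary = j; }
            void show() const {
                std::cout << "Real=" << real << " Imaginary=" << imaginary << std::endl;
            }

            // Assignment must be a non-static member with one parameter.
            comp& operator=(const comp& other) {
                real = other.real;
                imaginary = other.imaginary;
                return *this;   // return *this so that a = b = c chains
            }
        };

        int main() {
            comp a, b;
            a.set(10, 20);
            b = a;
            b.show();
            return 0;
        }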

    Read the article

  • C++ overloading comparative operators for a MyString class

    - by Taylor Gang
    bool operator == (const MyString& left, const MyString& right) { if(left.value == right.value) return true; else return false; } bool operator != (const MyString& left, const MyString& right) { if(left == right) return false; else return true; } bool operator < (const MyString& left, const MyString& right) { if(strcmp(left.value, right.value) == -1) return true; else return false; } bool operator > (const MyString& left, const MyString& right) { if(strcmp(left.value, right.value) == 1) return true; else return false; } bool operator <= (const MyString& left, const MyString& right) { if(strcmp(left.value, right.value) == -1 || strcmp(left.value, right.value) == 0) return true; else return false; } bool operator >= (const MyString& left, const MyString& right) { if(strcmp(left.value, right.value) == 1 || strcmp(left.value, right.value) == 0) return true; else return false; } So these are my implemented comparison operators for my MyString class. They fail the test program that my professor gave me, and I could use some direction. Thanks in advance for any and all help I receive.
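
    Two things are probably tripping the tests (assuming value is a C-style char array or pointer, as the strcmp calls suggest): strcmp is only guaranteed to return something negative, zero, or positive, not exactly -1 or 1, and left.value == right.value in operator== compares pointers, not characters. A sketch that compares against zero throughout, meant to sit next to the asker's MyString definition:

        #include <cstring>

        bool operator==(const MyString& left, const MyString& right) {
            return std::strcmp(left.value, right.value) == 0;   // compare contents, not pointers
        }
        bool operator!=(const MyString& left, const MyString& right) {
            return !(left == right);
        }
        bool operator<(const MyString& left, const MyString& right) {
            return std::strcmp(left.value, right.value) < 0;    // negative, not necessarily -1
        }
        bool operator>(const MyString& left, const MyString& right) {
            return std::strcmp(left.value, right.value) > 0;    // positive, not necessarily +1
        }
        bool operator<=(const MyString& left, const MyString& right) {
            return !(left > right);
        }
        bool operator>=(const MyString& left, const MyString& right) {
            return !(left < right);
        }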

    Read the article

  • C++ unrestricted union workaround

    - by Chris
    #include <stdio.h> struct B { int x,y; }; struct A : public B { // This whines about "copy assignment operator not allowed in union" //A& operator =(const A& a) { printf("A=A should do the exact same thing as A=B\n"); } A& operator =(const B& b) { printf("A = B\n"); } }; union U { A a; B b; }; int main(int argc, const char* argv[]) { U u1, u2; u1.a = u2.b; // You can do this and it calls the operator = u1.a = (B)u2.a; // This works too u1.a = u2.a; // This calls the default assignment operator >:@ } Is there any workaround to be able to do that last line u1.a = u2.a with the exact same syntax, but have it call the operator = (don't care if it's =(B&) or =(A&)) instead of just copying data? Or are unrestricted unions (not supported even in Visual Studio 2010) the only option?
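
    In C++03 a union member may not have a user-declared copy assignment operator (members must be POD), so there is no way to intercept u1.a = u2.a there. C++11's unrestricted unions lift that restriction (current g++ and clang accept this with -std=c++11; VS2010 does not, as the question notes), at the cost of having to manage the union's own special member functions when a member's are non-trivial. A sketch mirroring the question's types:

        #include <cstdio>

        struct B { int x, y; };

        struct A : B {
            A& operator=(const B& b) {
                std::printf("A = B\n");
                x = b.x; y = b.y;
                return *this;
            }
            A& operator=(const A& a) {              // allowed in a union member since C++11
                return *this = static_cast<const B&>(a);
            }
        };

        union U {
            A a;
            B b;
            U() : b() {}    // pick an initial active member
        };

        int main() {
            U u1, u2;
            u1.a = u2.b;    // prints A = B
            u1.a = u2.a;    // now also prints A = B
        }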

    Read the article

  • Perform Unit Conversions with the Windows 7 Calculator

    - by Matthew Guay
    Want to easily convert area, volume, temperature, and many other units?  With the Calculator in Windows 7, it’s easy to convert most any unit into another. The New Calculator in Windows 7 Calculator received a visual overhaul in Windows 7, but at first glance it doesn’t seem to have any new functionality.  Here’s Windows 7’s Calculator on the left, with Vista’s calculator on the right.   But, looks can be deceiving.  Windows 7’s calculator has lots of new exciting features.  Let’s try them out.  Simply type Calculator in the start menu search. To uncover the new features, click the View menu.  Here you can select many different modes, including Unit Conversion mode which we will look at. When you select the Unit Conversion mode, the Calculator will expand with a form on the left side. This conversions pane has 3 drop-down menus.  From the top one, select the type of unit you want to convert. In the next two menus, select which values you wish to convert to and from.  For instance, here we selected Temperature in the first menu, Degrees Fahrenheit in the second menu, and Degrees Celsius in the third menu. Enter the value you wish to convert in the From box, and the conversion will automatically appear in the bottom box. The Calculator contains dozens of conversion values, including more uncommon ones.  So if you’ve ever wanted to know how many US gallons are in a UK gallon, or how many knots a supersonic jet travels in an hour, this is a great tool for you!   Conclusion Windows 7 is filled with little changes that give you an all-around better experience in Windows to help you work more efficiently and productively.  With the new features in the Calculator, you just might feel a little smarter, too!

    Read the article

  • OWB 11gR2 – Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and get the benefits of slowly changing dimensions and cube loading? Now it's possible through some changes in 11gR2 that make the dimension and cube loading much more flexible. This will let you get the benefits of OWB's surrogate key handling and slowly changing dimension references when loading the fact table and you need degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading. What you need to do is create a dimension in OWB that is purely used for ETL metadata; the dimension itself is never deployed (its table is, but has no data). It has no surrogate keys, and it has a single level with a business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass the OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching. Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; this is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is only a single level, so there is no need for a hierarchy and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; this needs to be deployed but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table). Now go ahead and build your cube: use the regular TIMES dimension, for example, and your degenerate dimension DD_ORDERNUMBER; you can add in SCD dimensions etc. Configure the fact table created and set Deployable to false, so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references. If you generate the SQL you will see that the ON clause for matching includes the columns representing the degenerate dimension. Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler using OWB 11gR2. I'm sure there are other use cases where using this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help, making the cube operator much more useful. Good to hear any comments.
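
    Purely as an illustration of that last point about the generated SQL (the table and column names below are invented, not OWB output), the load ends up as a merge whose ON clause matches the degenerate dimension column directly, while surrogate-keyed dimensions are matched via their looked-up keys:

        MERGE INTO sales_fact f
        USING (SELECT s.order_number,              -- degenerate dimension value
                      t.time_key,                  -- surrogate key looked up from TIMES
                      s.amount
               FROM   staging_sales s
               JOIN   times_dim t ON t.calendar_date = s.sale_date) src
        ON (    f.order_number = src.order_number  -- degenerate dimension column in the ON clause
            AND f.time_key     = src.time_key)
        WHEN MATCHED THEN UPDATE SET f.amount = src.amount
        WHEN NOT MATCHED THEN INSERT (order_number, time_key, amount)
                              VALUES (src.order_number, src.time_key, src.amount);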

    Read the article
