Search Results

Search found 12914 results on 517 pages for 'declarative programming'.


  • Windows Phone 7 v. Windows 8 Metro "Same but Different"

    - by ryanabr
    I have been doing development on both Windows Phone 7 and Windows 8 Metro style applications over the past month and have really been enjoying both. What is great is that Silverlight is used for both development platforms. What is frustrating is the "same but different" nature of the two platforms. Many similar services and ways of doing things are available on both, but the objects, namespaces, and ways of handling certain cases are different. I almost had a heart attack when I thought that XmlDocument had been removed from the new WinRT. I was relieved (but a little annoyed) when I found out that it had shifted from the "System.Xml" namespace to the "Windows.Data.Xml.Dom" namespace. In my opinion this is worse than deprecating and reintroducing it, since there isn't the lead time to know that the change is coming, make changes, and adjust. I also think it breaks the compatibility that is advertised between WinRT and the .NET Framework from a programming perspective, as the code base will have to be physically different if compiled for one platform versus the other. Which brings up another issue: the need for separate DLLs for the different platforms that contain the same C# code behind them, which seems like the beginning of a code maintenance headache. Historically, I have kept source files "co-located" with the projects they are compiled into. After doing some research, I think I will end up keeping "common" files that need to be compiled into DLLs for the different platforms in a separate location in TFS, not directly included in any one Visual Studio project, but added as links in the projects that get compiled for Windows Phone 7 or Windows 8 (a sketch of this follows below). This will work fine, except for the case where dependencies don't line up for each platform as described above, but it will work fine for base classes that do the raw work at the most basic programming level.
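    As a hedged illustration of the "shared source, different namespaces" approach described above, here is a minimal C# sketch. It assumes the NETFX_CORE conditional compilation symbol is defined in the Windows 8 Metro project (WINDOWS_PHONE is the usual symbol on the phone side); the ConfigReader class itself is a hypothetical example of mine, not code from the post.

        // Shared source file, added "as link" to both the Windows Phone 7
        // project and the Windows 8 Metro project. The using directive is
        // switched per platform with a conditional compilation symbol.
        #if NETFX_CORE
        using Windows.Data.Xml.Dom;   // WinRT home of XmlDocument
        #else
        using System.Xml;             // .NET / Windows Phone 7 home of XmlDocument
        #endif

        public static class ConfigReader
        {
            public static string ReadRootName(string xml)
            {
                var doc = new XmlDocument();
                doc.LoadXml(xml);
        #if NETFX_CORE
                return doc.DocumentElement.TagName;  // WinRT member name
        #else
                return doc.DocumentElement.Name;     // System.Xml member name
        #endif
            }
        }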


  • Now that Apple's intending to deprecate Java on OS X, what language should I focus on?

    - by Smalltown2000
    After getting shot down on SO, I'll try this here: I'm sure you'll all know of Apple's recent announcement to deprecate Java on OS X (such as discussed here). I've recently come back to programming in the last year or so, since I originally learnt on ye olde BASIC many years ago. I have a Mac at home and a PC at work, and whilst I have Windows and Ubuntu installed on my Mac as VMs, I chose to focus my "relearning" on VB first (as it was closest to BASIC) and then rapidly moved to Java, as it was cross-platform (with minimal effort) and so it was easiest to work on code from both OSes. So my question: if the winds of change on Mac are blowing away from Java in this post-Sun era, what would be the best language to focus my new efforts on? Please note, this isn't a general "which language is better?" thread, nor an opportunity for the associated flame war; there are plenty of those and it's not the point. I realise that in the long term one shouldn't be allegiant to an individual language, so, taking this as an excuse, the question is specifically which language is going to be the quickest to become productive in given my background, while bearing in mind minimal portability rewrites (an aspiration rather than a requirement) and long-term value of usage. To that end, I see the main options as:

    C# - Closest in "style" to Java but M$-dependent (unless you consider Mono, of course)
    C++ - Hugely complex, but if even slightly conquered, then a win? Is it worth the climb up the learning curve?
    VB.Net - I already have the background, so it's the easiest to go back to, but who uses VB for .NET these days? Surely if using a CLI language I should use C#...
    Python - Cross-platform, but what about UI for the end user?

    EDIT: As a usage priority, I envision desktop application programming, though the ability to branch out in the future is always desirable. I guess graphics are the next direction once the basics are in place.


  • My proposed design is usually worse than my colleague's - how do I get better?

    - by user151193
    I have been programming for a couple of years and am generally good when it comes to fixing problems and creating small-to-medium scripts; however, I'm generally not good at designing large-scale programs in an object-oriented way. A few questions:

    Recently, a colleague who has the same number of years of experience as me and I were working on a problem. I had been working on the problem longer than he had; however, he came up with a better solution, and in the end we're going to use his design. This really affected me. I admit his design is better, but I wanted to come up with a design as good as his. I'm even contemplating quitting the job. Not sure why, but suddenly I feel under some pressure, e.g. what would the juniors think of me? Is this normal? Or am I reading a little too much into this?

    My job involves programming in Python. I try to read source code, but how do you think I can improve my design skills? Are there any good books or software that I should study? Please enlighten me. I will really appreciate your help.


  • How to learn to program [on hold]

    - by user94914
    I went to a community college and got a degree in computer science, but I found I learned only very little about programming. As a result I landed myself office assistant work (I've been at it for a year now). I want to study on my own and apply for an internship or a very entry-level development job. I am wondering how a person should learn to program now. I feel that I might not be doing it correctly; I understand everyone has a different approach, but I am really clueless about what to do, as it seems I am 5-10 years away. My options:

    1) Read the old college programming textbooks cover to cover, learn every single concept, and do all the practice problems until mastered (1-2 passes until error-free). I'm currently reading this Java book that way.
    2) Work on any project, and keep googling and reading tutorials (including the books on that specific language).

    I have been doing 1), but the progress is really slow, about 2-5 pages an hour over a 1000+ page book, and I feel really discouraged. I have a few more of them to go through (data structures, analysis of algorithms, computer theory, operating systems). I wonder: is this the right method? I know it is going to take time, but I am hoping to get some advice from current programmers.


  • How should a non-IT manager secure the long-term maintenance and development of essential legacy software?

    - by user105977
    I've been hunting for a place to ask this question for quite a while; maybe this is the place, although I'm afraid it's not the kind of "question with an answer" this site would prefer. We are a small, very specialized benefits administration firm with an extremely useful, robust collection of software, some written in COBOL but most in BASIC. Two full-time consultants have ably maintained and improved this system over more than 30 years. Needless to say, they will soon retire. (One of them has been desperate to retire for several years but is loyal to a fault and so hangs on despite her husband's insistence that golf should take priority.) We started down the path of converting to a system developed by one of only three firms in the country that offer the type of software we use. We now feel that although this firm is theoretically capable of completing the conversion process, they don't have the resources to do so in a timely manner, and we have come to believe that they will be unable to offer the kind of service we need to run our business. (There's nothing like being able to set one's own priorities and having the authority to allocate one's resources as one sees fit.) Hardware is not a problem; we are able to emulate very effectively on modern servers. If COBOL and BASIC were modern languages, we'd be willing to take the risk that we could find replacements for our current consultants going forward. It seems like there ought to be a business model for an IT support firm that concentrates on legacy platforms like this and provides the programming and software development talent to support a system like ours, removing from our backs the risk of finding the right programming talent and the job of convincing younger programmers that they can have a productive, rewarding career, in part in an old, non-sexy language like BASIC. Where do I find such firms?


  • What is the fastest way for me to become a full stack developer? [on hold]

    - by user136368
    I run a small web design firm. I have an overview of HTML, CSS, JS, PHP (Laravel), and MySQL, and I did a few courses on Codecademy. I wanted to build a web app in the company, but I find that I am severely crippled by my lack of programming expertise. I want to become a full stack developer who can build a prototype on his own. I cannot spend 5-10k USD on boot camp courses. Can someone suggest structured courses which can help me become a full stack developer? I found the following websites but I do not know if they can get me there. My goal: be able to make a working prototype of the ideas I come up with. (This is my primary goal. I do not want to be the lead developer; I just want to be able to make a prototype.) Several questions I have in mind:

    Will it be fine if I stick to PHP (Laravel)? Should I be using RoR?
    I have come across a few online resources: Codecademy, Code School, Treehouse, and The Odin Project. These are within my affordability range.
    I can commit to 2-3 months of intensive programming study. What do you suggest I do?


  • Psychology researcher wants to learn new language

    - by user273347
    I'm currently considering R, MATLAB, or Python, but I'm open to other options. Could you help me pick the best language for my needs? Here are the criteria I have in mind (not in order):

    Simple to learn. I don't really have a lot of free time, so I'm looking for something that isn't extremely complicated and/or difficult to pick up. I know some C, FWIW.

    Good for statistics/psychometrics. I do a ton of statistics and psychometrics analysis. A lot of it is basic stuff that I can do with SPSS, but I'd like to play around with the more advanced stuff too (bootstrapping, genetic programming, data mining, neural nets, modeling, etc.). I'm looking for a language/environment that can help me run my simpler analyses faster and give me more options than a canned stats package like SPSS. If it can even make tables for me, it'll be perfect.

    I also do a fair bit of experimental psychology. I use canned experiment "programming" software (SuperLab) to make most of my experiments, but I want to be able to write executable programs that I can run on any computer and that can compile the data from the experiments into a spreadsheet. I know Python has PsychoPy and PyEPL and MATLAB has Psychtoolbox, but I don't know which one is best. If R had something like this, I'd probably be sold on R already.

    I'm looking for something regularly used in academia and industry. Everybody else here (including myself, so far) uses canned stats and experiment programming software. One of the reasons I'm trying to learn a programming language is so that I can keep up when I move to another lab. Looking forward to your comments and suggestions.

    EDIT: Thank you all for your kind and informative replies. I appreciate it. It's still a tough choice because of so many strong arguments for each language.

    Python - Thinking about it, I've forgotten so much about C already (I don't even remember what to do with an array) that it might be better for me to start from scratch with a simple program that does what it's supposed to do. It looks like it can do most of the things I'll need, though not as cleanly as R and MATLAB.

    R - I'm really liking what I'm reading about R. The packages are perfect for my statistical work now. Given the purpose of R, I don't think it's suited to building psychological experiments, though. To clarify, what I mean is making a program that presents visual and auditory stimuli to my specifications (hundreds of them in a preset and/or randomized sequence) and records the response data gathered from participants.

    MATLAB - It's awesome that cognitive and neuro folk are recommending MATLAB, because I'm preparing for the big leap from social and personality psychology to cognitive neuro. The problem is the uni where I work doesn't have MATLAB licenses (and 3750 GBP for a compiler license is not an option for me, haha). Octave looks like a good alternative, and Psychtoolbox is compatible with Octave, thankfully.

    SQL - Thanks for the tip. I'll explore that option, too.

    Python will be the least backbreaking and most useful in the short term. R is well suited to my current work. MATLAB is well suited to my prospective work. It's a tough call, but I think I am now equipped to make a more well-informed decision about where to go next. Thanks again!


  • C# 5 Async, Part 1: Simplifying Asynchrony – That for which we await

    - by Reed
    Today's announcement at PDC of the future directions C# is taking excites me greatly. The new Visual Studio Async CTP is amazing. Asynchronous code, code which frustrates and demoralizes even the most advanced of developers, is taking a huge leap forward in terms of usability. This is handled by building on the Task functionality in .NET 4, as well as by the addition of two new keywords to the C# language: async and await.

    The core of the new asynchronous functionality is built upon three key features. First is the Task functionality in .NET 4, based on Task and Task<TResult>. While Task was intended to be the primary means of asynchronous programming with .NET 4, the .NET Framework was still based mainly on the Asynchronous Pattern and the Event-based Asynchronous Pattern. The .NET Framework added functionality and guidance for wrapping existing APIs into a Task-based API, but the framework itself didn't really adopt Task or Task<TResult> in any meaningful way. The CTP shows that, going forward, this is changing.

    One of the three key new features coming in C# is actually a .NET Framework feature. Nearly every asynchronous API in the .NET Framework has been wrapped into new, Task-based method calls. In the CTP, this is done via an external assembly (AsyncCtpLibrary.dll) which uses extension methods to wrap the existing APIs. However, going forward, this will be handled directly within the Framework, which will have a unifying effect throughout it. This is the first building block of the new features for asynchronous programming: going forward, all asynchronous operations will work via a method that returns Task or Task<TResult>.

    The second key feature is the new async contextual keyword being added to the language. The async keyword is used to declare an asynchronous function, which is a method that returns void, a Task, or a Task<T>. Inside the asynchronous function there must be at least one await expression. await is a new C# keyword used to automatically take a series of statements and break them up to potentially use discontinuous evaluation. This is done by using await on any expression that evaluates to a Task or Task<T>.

    For example, suppose we want to download a web page as a string. There is a new method added to WebClient: Task<string> WebClient.DownloadStringTaskAsync(Uri). Since this returns a Task<string>, we can use it within an asynchronous function. Suppose, for example, that we wanted to do something similar to my asynchronous Task example: download a web page asynchronously, check whether it supports XHTML 1.0, and report the result into a TextBox.
    This could be done like so:

        private async void button1_Click(object sender, RoutedEventArgs e)
        {
            string url = "http://reedcopsey.com";
            string content = await new WebClient().DownloadStringTaskAsync(url);
            this.textBox1.Text = string.Format(
                "Page {0} supports XHTML 1.0: {1}",
                url,
                content.Contains("XHTML 1.0"));
        }

    Let's walk through what's happening here, step by step. By adding the async contextual keyword to the method definition, we are able to use the await keyword on our WebClient.DownloadStringTaskAsync method call.

    When the user clicks this button, the new method (Task<string> WebClient.DownloadStringTaskAsync(string)) is called, which returns a Task<string>. By adding the await keyword, the runtime will call this method that returns Task<string>, and execution will return to the caller at this point. This means that our UI is not blocked while the web page is downloaded. Instead, the UI thread will "await" at this point and let the WebClient do its thing asynchronously.

    When the WebClient finishes downloading the string, the user interface's synchronization context will automatically be used to "pick up" where it left off, and the Task<string> returned from DownloadStringTaskAsync is automatically unwrapped and set into the content variable. At this point, we can use that value to set our text box content.

    The key point here is that asynchronous functions are declared with the async keyword and contain one or more await expressions. In addition to the obvious benefits of shorter, simpler code, there are some subtle but tremendous benefits in this approach. When the execution of this asynchronous function continues after the first await statement, the initial synchronization context is used to continue the execution of this function. That means that we don't have to explicitly marshal the call that sets textBox1.Text back to the UI thread; it's handled automatically by the language and framework! Exception handling around asynchronous method calls also just works.

    I'd recommend every C# developer take a look at the documentation on the new Asynchronous Programming for C# and Visual Basic page, download the Visual Studio Async CTP, and try it out.
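    For contrast, here is a rough sketch of the same operation written against the plain .NET 4 Task API, without async/await (it assumes the same DownloadStringTaskAsync wrapper from the CTP). Everything await gives us (the continuation, the unwrapping of task.Result, and the return to the UI context via TaskScheduler.FromCurrentSynchronizationContext) has to be spelled out by hand, and error handling is still missing:

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            string url = "http://reedcopsey.com";
            new WebClient().DownloadStringTaskAsync(url)
                .ContinueWith(task =>
                {
                    // Runs on the UI thread only because of the scheduler below.
                    this.textBox1.Text = string.Format(
                        "Page {0} supports XHTML 1.0: {1}",
                        url,
                        task.Result.Contains("XHTML 1.0"));
                },
                TaskScheduler.FromCurrentSynchronizationContext());
        }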


  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style to not only Node.js but the JavaScript world. If you know of, or have heard about, the new feature in C# 5.0 called "async and await", or if you have learnt F#, you will find that Wind brings a similar async programming experience to JavaScript. By using Wind, we can write async code that looks like sync code. The callbacks, asynchronous state and exceptions will be handled by Wind automatically and transparently.

    What's the Problem: Dense "Callback" Phobia

    Let's first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed around the async callback pattern, which means that when the operation is done, it invokes a callback function taken from the last parameter. For example, the database connection opening code would be like this:

        sql.open(connectionString, function(error, conn) {
            if (error) {
                // some error handling code
            }
            else {
                // connection opened successfully
            }
        });

    And then if we need to query the database, the code would be like this, nested in the previous function:

        sql.open(connectionString, function(error, conn) {
            if (error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if (error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                    }
                });
            }
        });

    Assume we need to copy some data from this database to another: then we need to open another connection and execute the command within the function under the query function.

        sql.open(connectionString, function(error, conn) {
            if (error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if (error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                        target.open(targetConnectionString, function(error, t_conn) {
                            if (error) {
                                // connect failed
                            }
                            else {
                                t_conn.queryRaw(copy_command, function(error, results) {
                                    if (error) {
                                        // copy failed
                                    }
                                    else {
                                        // and then, what do you want to do now...
                                    }
                                });
                            }
                        });
                    }
                });
            }
        });

    This is just an example; in a real project the logic would be more complicated. Our application can easily become messed up, with the business process fragmented across many callback functions. I would like to call this "Dense Callback Phobia". The challenge is to make the code straightforward and easy to read, something like below.
        try
        {
            // open source connection
            var s_conn = sqlConnect(s_connectionString);
            // retrieve data
            var results = sqlExecuteCommand(s_conn, s_command);

            // open target connection
            var t_conn = sqlConnect(t_connectionString);
            // prepare the copy command
            var t_command = getCopyCommand(results);
            // execute the copy command
            sqlExecuteCommand(t_conn, t_command);
        }
        catch (ex)
        {
            // error handling
        }

    What's the Problem: Sync-styled Async Programming

    Similar to the previous problem, the callback-styled async programming model makes the upcoming operation a part of the current operation, mixed in with the error handling code, so it's very hard to understand what on earth the code will do. And since Node.js utilizes non-blocking IO, we cannot simply invoke operations one by one; they will be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just inserted the data into table storage one by one and then printed the "Finished" message, I would see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and the print operation execute synchronously, I introduced a module named "async" and the code was changed as below:

        async.forEach(results.rows,
            function (row, callback) {
                var resource = {
                    "PartitionKey": row[1],
                    "RowKey": row[0],
                    "Value": row[2]
                };
                client.insertEntity(tableName, resource, function (error) {
                    if (error) {
                        callback(error);
                    }
                    else {
                        console.log("entity inserted.");
                        callback(null);
                    }
                });
            },
            function (error) {
                if (error) {
                    error["target"] = "insertEntity";
                    res.send(500, error);
                }
                else {
                    console.log("all done.");
                    res.send(200, "Done!");
                }
            });

    This ensures that the "Finished" message will be printed once all table entities have been inserted, but it cannot promise that the records will be inserted in sequence. Another challenge, then, is to make the code look sync-styled:

        try
        {
            forEach(row in rows) {
                var entity = { /* ... */ };
                tableClient.insert(tableName, entity);
            }

            console.log("Finished");
        }
        catch (ex) {
            console.log(ex);
        }

    How "Wind" Helps

    "Wind" is a JavaScript library which provides control flow in plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It's available in NPM, so we can install it through "npm install wind". Now let's create a very simple Node.js application as an example. This application will take some website URLs from the command arguments, retrieve the body length of each and print them in the console, then print "Finished" at the end. I'm going to use the "request" module to keep the HTTP calls simple, so I also need to install it with "npm install request". The code would be like this:
        var request = require("request");

        // get the urls from arguments; the first two arguments are `node.exe` and `fetch.js`
        var args = process.argv.splice(2);

        // main function
        var main = function() {
            for (var i = 0; i < args.length; i++) {
                // get the url
                var url = args[i];
                // send the http request and try to get the response and body
                request(url, function(error, response, body) {
                    if (!error && response.statusCode == 200) {
                        // log the url and the body length
                        console.log(
                            "%s: %d.",
                            response.request.uri.href,
                            body.length);
                    }
                    else {
                        // log error
                        console.log(error);
                    }
                });
            }

            // finished
            console.log("Finished");
        };

        // execute the main function
        main();

    Let's execute this application. (I split the command across multiple lines for better reading.)

        node fetch.js
            "http://www.igt.com/us-en.aspx"
            "http://www.igt.com/us-en/games.aspx"
            "http://www.igt.com/us-en/cabinets.aspx"
            "http://www.igt.com/us-en/systems.aspx"
            "http://www.igt.com/us-en/interactive.aspx"
            "http://www.igt.com/us-en/social-gaming.aspx"
            "http://www.igt.com/support.aspx"

    In the output, the finish message is printed at the beginning, and the pages' lengths are retrieved in a different order than we specified. This is because in this code the request and console logging commands are executed asynchronously and concurrently. Now let's introduce Wind to make them execute in order, which means it will request the websites one by one and print the message at the end.

    First of all we need to import the Wind package and make sure there's only one global variable named "Wind"; ensure it's "Wind", not "wind".

        var Wind = require("wind");

    Next, we need to tell Wind which code will be executed asynchronously so that Wind can control the execution process. In this case the "request" operation executes asynchronously, so we will create a "Task" by using a built-in helper function in Wind named Wind.Async.Task.create.

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if (error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data = {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

    The code above creates a "Task" from the original request-calling code. In Wind, a "Task" is an operation that will finish at some time in the future. A task can be started by invoking its start() method, but no one knows when it will actually finish. The Wind.Async.Task.create helper creates a task for us. Its only parameter is a function where we put the actual operation and then notify the task object that it finished successfully or failed by using the complete() method. In the code above I invoked the request method; if it retrieved the response successfully I set the status of the task to "success" along with the URL and body length, and if it failed I set the task to "failure" and passed the error out.
    Next, we will change the main() function. In Wind, if we want a function to be controlled by Wind we need to mark it as "async", which is done as follows:

        var main = eval(Wind.compile("async", function() {
        }));

    When the application is running, Wind detects the 'eval(Wind.compile("async", function' pattern and generates an anonymous function from the body of the original one. The application then runs the anonymous code instead of the original. In our example the main function becomes:

        var main = eval(Wind.compile("async", function() {
            for (var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

    As you can see, when I try to request the URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and the main flow should be paused until it finishes (or fails). So in this case my application will pause until the first response is received, then print its body length, then try the next one, and at the end print the finish message.

    Finally, execute the main function. The full code looks like this:

        var request = require("request");
        var Wind = require("wind");

        var args = process.argv.splice(2);

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if (error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data = {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

        var main = eval(Wind.compile("async", function() {
            for (var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

        main().start();

    Run our new application. At the beginning we see the code Wind compiled and generated; then the pages are requested one by one, and at the end the finish message is printed. Below is the code Wind generated for us, with the original code shown in comments alongside the output code.
        // Original:
        function () {
            for (var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }

        // Compiled:
        /* async << function () { */ (function () {
            var _builder_$0 = Wind.builders["async"];
            return _builder_$0.Start(this,
                _builder_$0.Combine(
                    _builder_$0.Delay(function () {
                        /* var i = 0; */ var i = 0;
                        /* for ( */ return _builder_$0.For(function () {
                            /* ; i < args.length */ return i < args.length;
                        }, function () {
                            /* ; i ++) { */ i ++;
                        },
                        /* try { */ _builder_$0.Try(
                            _builder_$0.Delay(function () {
                                /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) {
                                    /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length);
                                    return _builder_$0.Normal();
                                });
                            }),
                            /* } catch (ex) { */ function (ex) {
                                /* console.log(ex); */ console.log(ex);
                                return _builder_$0.Normal();
                            /* } */ },
                            null
                        )
                        /* } */ );
                    }),
                    _builder_$0.Delay(function () {
                        /* console.log("Finished"); */ console.log("Finished");
                        return _builder_$0.Normal();
                    })
                )
            );
        /* } */ })

    How Wind Works

    Some may raise a big concern on seeing that I used "eval" in my code, assuming that Wind uses "eval" to execute code dynamically, since "eval" has very poor performance. But Wind does NOT use "eval" to run the code. It only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind checks for 'eval(Wind.compile("async", function', so it knows this function should be compiled. It then uses parse-js to analyze the inner JavaScript and generate the anonymous code in memory, and rewrites the original code so that when the application runs it uses the anonymous function instead of the original one. Since the code generation happens once, when the application starts, it doesn't matter how long our application runs or how many times the async function is invoked: the generated code is reused, with no need to generate it again. So there's no significant performance hit when using Wind.

    Wind in My Previous Demo

    Let's adopt Wind in one of my previous demonstrations to see how it helps make our code simple, straightforward and easy to read and understand. In this post, when I implemented the functionality that copied the records from my WASD database to table storage, the logic was:

        1. Open the database connection.
        2. Execute a query to select all records from the table.
        3. Recreate the table in Windows Azure table storage.
        4. Create an entity from each of the records retrieved previously and insert them into table storage.
        5. Finally, show a message as the HTTP response.

    But with so many callbacks and async operations, it was very hard to understand the logic from the code. Now let's use Wind to rewrite it. First of all, of course, we need the Wind package; include the package files in the project and mark them as "Copy always". Then add the Wind package to the source code. Pay attention to the variable name: it must be "Wind", not "wind".
        var express = require("express");
        var async = require("async");
        var sql = require("node-sqlserver");
        var azure = require("azure");
        var Wind = require("wind");

    Now we need to create some async functions with Wind. All async operations should be wrapped so they can be controlled by Wind: open database, retrieve records, recreate table (delete and create), and insert entity into table. Below are these new functions, all created using Wind.Async.Task.create.

        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };

        sql.queryAsync = function (conn, query) {
            return Wind.Async.Task.create(function (t) {
                conn.queryRaw(query, function (error, results) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", results);
                    }
                });
            });
        };

        azure.recreateTableAsync = function (tableName) {
            return Wind.Async.Task.create(function (t) {
                client.deleteTable(tableName, function (error, successful, response) {
                    console.log("delete table finished");
                    client.createTableIfNotExists(tableName, function (error, successful, response) {
                        console.log("create table finished");
                        if (error) {
                            t.complete("failure", error);
                        }
                        else {
                            t.complete("success", null);
                        }
                    });
                });
            });
        };

        azure.insertEntityAsync = function (tableName, entity) {
            return Wind.Async.Task.create(function (t) {
                client.insertEntity(tableName, entity, function (error, entity, response) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", null);
                    }
                });
            });
        };

    Then, in order to use these functions, we create a new function which contains all the steps for copying the data:

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Let's execute the steps one by one with the "$await" keyword introduced by Wind, so that they are invoked in sequence. First, open the database connection:

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Then retrieve all records over that connection:

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    After recreating the table, we create the entities and insert them into table storage:
        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Finally, send the response back to the browser:

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    If we compare this with the previous code, we find it has become more readable and much easier to understand; it's easy to see what the function does even without the comments. When the user goes to the URL "/was/copyRecords" we execute the function above:

        app.get("/was/copyRecords", function (req, res) {
            copyRecords(req, res).start();
        });

    In the logs printed in the local compute emulator console we can see the functions executed one by one, and finally the response returned to the browser.

    Scaffold Functions in Wind

    Wind provides not only the async flow control and compile functions, but many scaffold methods as well, which let us build async code more easily. I'm going to introduce some basic scaffold functions here. In the code above I created functions that wrap the original async functions, such as open database, create table, etc. All of them are very similar: create a task using Wind.Async.Task.create and return the error or result object through the task's complete() function. In fact, Wind provides functions that create the task object from the original async function for us. If the original async function has only a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object directly. For example, the code below returns a task object which wraps the file-existence check function:
        var Wind = require("wind");
        var fs = require("fs");

        fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

    In Node.js a very popular async function pattern is for the first parameter of the callback function to represent the error object, and the other parameters the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open-database function can be created with the code below:

        sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);

        /*
        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };
        */

    When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK insert-entity function, cannot be processed correctly, so I personally suggest writing the wrapped methods manually.

    Another scaffold feature in Wind is parallel task coordination. In this example the steps of opening the database, retrieving the records and recreating the table should be invoked one by one, but the copying of data from the database to table storage can be executed in parallel. For this, Wind has a scaffold function named Task.whenAll. Task.whenAll accepts a list of tasks and creates a new task which completes only when all of the inner tasks have completed, or fails if any error occurs. For example, in the code below I used Task.whenAll to make all copy operations execute at the same time:

        var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage in parallel
                    var tasks = new Array(results.rows.length);
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        tasks[i] = azure.insertEntityAsync(tableName, entity);
                    }
                    $await(Wind.Async.Task.whenAll(tasks));
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

        app.get("/was/copyRecordsInParallel", function (req, res) {
            copyRecordsInParallel(req, res).start();
        });

    Besides task creation and coordination, Wind supports cancellation, so we can send a cancellation signal to tasks, and it includes an exception solution which means any exceptions will be reported to the caller function.

    Summary

    In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, unlike other async libraries and frameworks, Wind adopts ideas from F# and C# and utilizes runtime code generation to make it easy to write async, callback-based functions in a sync-style way.
    By using Wind there are almost no callbacks, and the code becomes very easy to understand. Wind is still being developed and improved. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff at:

        - Email: [email protected]
        - Group: https://groups.google.com/d/forum/windjs
        - GitHub: https://github.com/JeffreyZhao/wind/issues

    The source code can be downloaded here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • New features of C# 4.0

    This article covers the new features of C# 4.0. It is divided into the following sections:

        - Introduction
        - Dynamic Lookup
        - Named and Optional Arguments
        - Features for COM interop
        - Variance
        - Relationship with Visual Basic
        - Resources

    Introduction

    It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out.

    Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

    C# 4.0

    The major theme for C# 4.0 is dynamic programming. Increasingly, objects are "dynamic" in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include:

        a. objects from dynamic programming languages, such as Python or Ruby
        b. COM objects accessed through IDispatch
        c. ordinary .NET types accessed through reflection
        d. objects with changing structure, such as HTML DOM objects

    While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set.

    The new features in C# 4.0 fall into four groups:

    Dynamic lookup. Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass C# static type checking and instead get resolved at runtime.

    Named and optional parameters. Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.
    COM-specific interop features. Dynamic lookup, together with named and optional parameters, helps make programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience.

    Variance. It used to be that an IEnumerable<string> wasn't an IEnumerable<object>. Now it is: C# embraces type-safe "co- and contravariance", and common BCL types are updated to take advantage of that.

    Dynamic Lookup

    Dynamic lookup gives you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn't so. Oftentimes this will be no loss, because the object wouldn't have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

    The dynamic type

    C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can "do things to it" that are resolved only at runtime:

        dynamic d = GetDynamicObject(...);
        d.M(7);

    The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to "call M with an int" on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, "suspending belief" until runtime. Conversely, there is an "assignment conversion" from dynamic to any other type, which allows implicit conversion in assignment-like constructs:

        dynamic d = 7;   // implicit conversion
        int i = d;       // assignment conversion

    Dynamic operations

    Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically:

        dynamic d = GetDynamicObject(...);
        d.M(7);              // calling methods
        d.f = d.P;           // getting and setting fields and properties
        d["one"] = d["two"]; // getting and setting through indexers
        int i = d + 3;       // calling operators
        string s = d(5, 7);  // invoking as a delegate

    The role of the C# compiler here is simply to package up the necessary information about "what is being done to d", so that the runtime can pick it up and determine what the exact meaning of it is, given an actual object d. Think of it as deferring part of the compiler's job to runtime. The result of any dynamic operation is itself of type dynamic.

    Runtime lookup

    At runtime a dynamic operation is dispatched according to the nature of its target object d:

    COM objects. If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don't have a Primary Interop Assembly (PIA), and relying on COM features that don't have a counterpart in C#, such as indexed properties and default properties.
    Dynamic objects. If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM, to allow direct access to the object's properties using property syntax.

    Plain objects. Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# "runtime binder" which implements C#'s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to "finish the work" on dynamic operations that was deferred by the static compiler.

    Example

    Assume the following code:

        dynamic d1 = new Foo();
        dynamic d2 = new Bar();
        string s;
        d1.M(s, d2, 3, null);

    Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the "payload") is essentially equivalent to: "Perform an instance method call of M with the following arguments:

        1. a string
        2. a dynamic
        3. a literal int 3
        4. a literal object null"

    At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up the job of finishing overload resolution based on runtime type information, proceeding as follows:

        1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
        2. Method lookup and overload resolution is performed on the type Foo with the call M(string, Bar, 3, null) using ordinary C# semantics.
        3. If the method is found it is invoked; otherwise a runtime exception is thrown.

    Overload resolution with dynamic arguments

    Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

        Foo foo = new Foo();
        dynamic d = new Bar();
        var result = foo.M(d);

    The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

    The Dynamic Language Runtime

    An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR.
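    To make the runtime dispatch above concrete, here is a small self-contained sketch. It uses the names as they eventually shipped in .NET 4 (ExpandoObject and Microsoft.CSharp.RuntimeBinder.RuntimeBinderException); note that the CTP-era IDynamicObject interface described in this whitepaper was renamed IDynamicMetaObjectProvider by release. The Foo class is a stand-in of mine, not code from the whitepaper:

        using System;
        using System.Dynamic;
        using Microsoft.CSharp.RuntimeBinder;

        class Foo
        {
            public void M(string s, object o, int i, object extra)
            {
                Console.WriteLine("Foo.M bound and invoked at runtime");
            }
        }

        class Program
        {
            static void Main()
            {
                // Plain object: dispatched via reflection plus the C# runtime binder.
                dynamic d1 = new Foo();
                d1.M("hello", new object(), 3, null);

                // A failed lookup surfaces as a runtime exception, not a compile error.
                try
                {
                    d1.NoSuchMethod();
                }
                catch (RuntimeBinderException ex)
                {
                    Console.WriteLine("Binder error: " + ex.Message);
                }

                // A dynamic object defines its own member lookup; ExpandoObject
                // simply creates members on first assignment.
                dynamic bag = new ExpandoObject();
                bag.Count = 42;
                int i = bag.Count;   // assignment conversion from dynamic to int
                Console.WriteLine(i);
            }
        }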
    For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of, e.g., a library representing an inherently dynamic domain.

    Open issues

    There are a few limitations and things that might work differently than you would expect:

        - The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn't have syntax to support this.
        - Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
        - Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. "understand") an anonymous function without knowing what type it is converted to.

    One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

        dynamic collection = …;
        var result = collection.Select(e => e + 5);

    If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

    Named and Optional Arguments

    Named and optional parameters are really two distinct features, but they are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list.

    Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

    Optional parameters

    A parameter is declared optional simply by providing a default value for it:

        public void M(int x, int y = 5, int z = 7);

    Here y and z are optional parameters and can be omitted in calls:

        M(1, 2, 3); // ordinary call of M
        M(1, 2);    // omitting z – equivalent to M(1, 2, 7)
        M(1);       // omitting both y and z – equivalent to M(1, 5, 7)

    Named and optional arguments

    C# 4.0 does not permit you to omit arguments between commas, as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead, any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

        M(1, z: 3);    // passing z by name

    or

        M(x: 1, z: 3); // passing both x and z by name

    or even

        M(z: 3, x: 1); // reversing the order of arguments

    All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors.
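    The calls above can be packaged into a minimal runnable sketch (the Demo wrapper and Console output are mine, for illustration):

        using System;

        class Demo
        {
            // y and z are optional: callers may omit them.
            static void M(int x, int y = 5, int z = 7)
            {
                Console.WriteLine("x={0}, y={1}, z={2}", x, y, z);
            }

            static void Main()
            {
                M(1, 2, 3);    // ordinary call            -> x=1, y=2, z=3
                M(1, 2);       // omitting z               -> x=1, y=2, z=7
                M(1);          // omitting y and z         -> x=1, y=5, z=7
                M(1, z: 3);    // z by name, y omitted     -> x=1, y=5, z=3
                M(z: 3, x: 1); // order is free with names -> x=1, y=5, z=3
                               // (the 3 is still evaluated before the 1)
            }
        }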
Overload resolution

Named and optional arguments affect overload resolution, but the changes are relatively simple:

· A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type.
· Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes.
· If two signatures are equally good, one that does not omit optional parameters is preferred.

    M(string s, int i = 1);
    M(object o);
    M(int i, string s = "Hello");
    M(int i);

    M(5);

Given these overloads, we can see the working of the rules above. M(string, int) is not applicable because 5 doesn’t convert to string. M(int, string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int, string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int, string) because no optional arguments are omitted. Thus the method that gets called is M(int).

Features for COM interop

Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

Dynamic import

Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance.

In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

    excel.Cells[1, 1].Value = "Hello";

instead of

    ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

and

    Excel.Range range = excel.Cells[1, 1];

instead of

    Excel.Range range = (Excel.Range)excel.Cells[1, 1];

Compiling without PIAs

Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application.

The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

Omitting ref

Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

Open issues

A few COM interface features are still not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above, these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

Variance

An aspect of generics that often comes across as surprising is that the following is illegal:

    IList<string> strings = new List<string>();
    IList<object> objects = strings;

The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

    objects[0] = 5;
    string s = strings[0];

allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety.

However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

    IEnumerable<object> objects = strings;

there is no way we can put the wrong kind of thing into strings through objects, because objects doesn’t have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

Covariance

In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

    public interface IEnumerable<out T> : IEnumerable
    {
        IEnumerator<T> GetEnumerator();
    }

    public interface IEnumerator<out T> : IEnumerator
    {
        bool MoveNext();
        T Current { get; }
    }

The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also, e.g., a sequence of objects. This is useful in many LINQ methods. Using the declarations above:

    var result = strings.Union(objects); // succeeds with an IEnumerable<object>

This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

Contravariance

Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>:

    public interface IComparer<in T>
    {
        int Compare(T left, T right);
    }

The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance, as the sketch below illustrates.
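A minimal, hedged sketch of contravariance in use (not from the white paper); the comparer and its ordering are invented purely for illustration:

    using System;
    using System.Collections.Generic;

    // A comparer written once against object...
    class ByHashComparer : IComparer<object>
    {
        public int Compare(object left, object right)
        {
            // Order by hash code; an arbitrary ordering chosen for the demo.
            return left.GetHashCode().CompareTo(right.GetHashCode());
        }
    }

    class Demo
    {
        static void Main()
        {
            // ...is usable wherever an IComparer<string> is expected,
            // because IComparer<T> is declared with 'in T' in .NET 4.0.
            IComparer<string> byHash = new ByHashComparer();
            Console.WriteLine(byHash.Compare("spot", "run"));
        }
    }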
A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

    public delegate TResult Func<in TArg, out TResult>(TArg arg);

Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object, string> can in fact be used as a Func<string, object>.

Limitations

Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

COM Example

Here is a larger Office automation example that shows many of the new C# features in action.

    using System;
    using System.Diagnostics;
    using System.Linq;
    using Excel = Microsoft.Office.Interop.Excel;
    using Word = Microsoft.Office.Interop.Word;

    class Program
    {
        static void Main(string[] args)
        {
            var excel = new Excel.Application();
            excel.Visible = true;

            excel.Workbooks.Add();                       // optional arguments omitted

            excel.Cells[1, 1].Value = "Process Name";    // no casts; Value dynamically
            excel.Cells[1, 2].Value = "Memory Usage";    // accessed

            var processes = Process.GetProcesses()
                .OrderByDescending(p => p.WorkingSet)
                .Take(10);

            int i = 2;
            foreach (var p in processes)
            {
                excel.Cells[i, 1].Value = p.ProcessName; // no casts
                excel.Cells[i, 2].Value = p.WorkingSet;  // no casts
                i++;
            }

            Excel.Range range = excel.Cells[1, 1];       // no casts

            Excel.Chart chart = excel.ActiveWorkbook.Charts.
                Add(After: excel.ActiveSheet);           // named and optional arguments

            chart.ChartWizard(
                Source: range.CurrentRegion,
                Title: "Memory Usage in " + Environment.MachineName); // named + optional

            chart.ChartStyle = 45;
            chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
                Excel.XlCopyPictureFormat.xlBitmap,
                Excel.XlPictureAppearance.xlScreen);

            var word = new Word.Application();
            word.Visible = true;

            word.Documents.Add();                        // optional arguments
            word.Selection.Paste();
        }
    }

The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument – something which C# does not understand. However, the argument is optional. Since the access is dynamic, it goes through the runtime COM binder, which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

Relationship with Visual Basic

A number of the features introduced in C# 4.0 already exist, or will be introduced in some form or other, in Visual Basic:

· Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
· Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
· No-PIA and variance are both being introduced to VB and C# at the same time.

VB in turn is adding a number of features that have hitherto been a mainstay of C#.
As a result, future versions of C# and VB will have much better feature parity, for the benefit of everyone.

Resources

All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary - Part 5

If you have been reading this series, by now you are aware of all the pros and cons that can come along with training. We’ve asked and answered hard questions, and investigated the “whys” and “hows” of training. Now it is time to talk about all the different kinds of training that are out there!

On Job Training

The most common type of training is on the job training. Everyone receives this kind of education – even experts who come in to consult have to be taught where the printer, pens, and copy machines are. If you are thinking about more concrete topics, though, on the job training can be some of the easiest to come across. Picture this: someone in the company whom you really admire is hard at work on a project. You come up to them and ask to help them out – if they are a busy developer, the odds are that they will say “yes, please!” If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor. However, some people may want the task of being a mentor. It can never hurt to ask. Most people will be more than willing to pass their knowledge along.

Extreme Programming

If your company and coworkers are willing, you can even investigate Extreme Programming. This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback. You can find more information at http://www.extremeprogramming.org/. If this is something your company could use, suggest it to your supervisor. Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects. If the answer is yes, then you have the opportunity to get some of the best on the job training around.

In Person Training

When you say the word “training,” most people’s minds go back to the classroom, an image they are familiar with. While training doesn’t always have to be in a traditional setting, because it is so familiar it can also be the most valuable type of training. There are many ways to get training from a live instructor. Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately. Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company. If you are the one who asks your supervisor for a training session that can also be turned into a consultation, you may stick in their minds as an incredibly dedicated employee. If you can’t find a representative, local colleges can also be a good resource for free or cheap classes – or they may have representatives coming who are willing to take on a few more students.

Benefits of On Demand Developer Training

Of course, you can often get the best of all these types of training with online or On Demand training.
You can get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues), there are often real-world examples to follow along with – like on the job training – and best of all you can learn whenever you have the time or the need. Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed? No problem! On Demand training is especially useful if you need to slow down, pause, or rewind a training session. Not even a real-life instructor can do that!

When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the blog post concise, so I only touched on three major training aspects: 1) On Job Training, 2) In Person Training and 3) Online Training. Here is the question for you – are there any other kinds of training methods available which are effective and worth considering? If yes, what are they? I may write a follow-up blog post on the same subject next week.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations. I had missed a point or two.

I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind why I chose to use these tools.

Let’s start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as the following:

Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language.

Why should you care? Basically, it allows you to map the business objects in your code to the persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points:

· Development speed. No more need to repetitively map query results to object members. Once the map is created the code is rendered for you.
· Persistence portability. The ORM knows how to map SQL-specific syntax for the persistence engine you choose. It does not matter if it is SQL Server, Oracle or another database of your choosing.
· Standard/boilerplate code is simplified. The basic CRUD operations are consistent and can use database metadata for basic operations.

So how does this help? Well, let’s compare some of the ORM tools that I have used and/or researched. I have been interested in ORM for some time now. My ORM of choice for a long time was NHibernate, and I still believe it has a strong case in some business situations. However, you have to take business considerations into account and the law of diminishing returns. Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO). The primary reason for this is that DevExpress has the eXpress Application Framework (XAF) that sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be created from these maps. While out of the box they provide some simple list and detail screens, you can easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. This sounds worse than it really is. What I mean by this is that if you choose to follow the DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while not worrying about the basics.

I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable. And then couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.
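To ground the "standard/boilerplate code is simplified" point before the comparison table, here is a minimal, hedged sketch in the Entity Framework Code First style (the DbContext API); the Customer and ShopContext names are invented for illustration:

    using System.Data.Entity;

    // Hypothetical domain class; the ORM derives the table and column
    // mapping from these declarations alone.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The context exposes the persisted sets and handles CRUD plumbing.
    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    // Usage: no hand-written SQL or result-to-object mapping required.
    // using (var db = new ShopContext())
    // {
    //     db.Customers.Add(new Customer { Name = "Ada" });
    //     db.SaveChanges();
    // }

The same idea applies, with different surface syntax, to each of the ORMs compared below.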
Designer Features | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools
Uses XML to map relationships | - | Yes | - | - | - | -
Visual class designer interface | Yes | - | - | - | - | Yes
Management integrated w/ Visual Studio | Yes | - | - | Yes | - | Yes
Supports schema first approach | Yes | - | - | Yes | - | Yes
Supports model first approach | Yes | - | - | Yes | Yes | Yes
Supports code first approach | Yes | Yes | Yes | Yes | Yes | Yes
Attribute driven coding style | Yes | - | Yes | - | Yes | Yes

I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the applications with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior level developers on our team. I have also worked on multiple teams where they believed in writing their own “framework”. In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework. Each time that I have worked on custom frameworks, the framework eventually becomes old, outdated and full of “performance” enhancements specific to one or two requirements. With an ORM, there are a lot of smarter people than me working on the bigger issues of persistence and performance. Again, my recommendation would be to use an available framework and get to work on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-to-object-member code again and again? Thanks

    Read the article

  • Parallelism in .NET – Part 17, Think Continuations, not Callbacks

    - by Reed
In traditional asynchronous programming, we’d often use a callback to handle notification of a background task’s completion. The Task class in the Task Parallel Library introduces a cleaner alternative to the traditional callback: continuation tasks.

Asynchronous programming methods typically required callback functions. For example, MSDN’s Asynchronous Delegates Programming Sample shows a class that factorizes a number. The original method in the example has the following signature:

    public static bool Factorize(int number, ref int primefactor1, ref int primefactor2)
    {
        //...

However, calling this is quite “tricky”, even if we modernize the sample to use lambda expressions via C# 3.0. Normally, we could call this method like so:

    int primeFactor1 = 0;
    int primeFactor2 = 0;
    bool answer = Factorize(10298312, ref primeFactor1, ref primeFactor2);
    Console.WriteLine("{0}/{1} [Succeeded {2}]", primeFactor1, primeFactor2, answer);

If we want to make this operation run in the background, and report to the console via a callback, things get trickier. First, we need a delegate definition:

    public delegate bool AsyncFactorCaller(
        int number, ref int primefactor1, ref int primefactor2);

Then we need to use BeginInvoke to run this method asynchronously:

    int primeFactor1 = 0;
    int primeFactor2 = 0;
    AsyncFactorCaller caller = new AsyncFactorCaller(Factorize);
    caller.BeginInvoke(10298312, ref primeFactor1, ref primeFactor2,
        result =>
        {
            int factor1 = 0;
            int factor2 = 0;
            bool answer = caller.EndInvoke(ref factor1, ref factor2, result);
            Console.WriteLine("{0}/{1} [Succeeded {2}]", factor1, factor2, answer);
        }, null);

This works, but is quite difficult to understand from a conceptual standpoint. To combat this, the framework added the Event-based Asynchronous Pattern, but it isn’t much easier to understand or author.

Using .NET 4’s new Task<T> class and a continuation, we can dramatically simplify the implementation of the above code, as well as make it much more understandable. We do this via the Task.ContinueWith method. This method will schedule a new Task upon completion of the original task, and provide the original Task (including its Result if it’s a Task<T>) as an argument. Using Task, we can eliminate the delegate, and rewrite this code like so:

    var background = Task.Factory.StartNew(() =>
    {
        int primeFactor1 = 0;
        int primeFactor2 = 0;
        bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
        return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
    });

    background.ContinueWith(task =>
        Console.WriteLine("{0}/{1} [Succeeded {2}]",
            task.Result.Factor1, task.Result.Factor2, task.Result.Result));

This is much simpler to understand, in my opinion. Here, we’re explicitly asking to start a new task, then continue the task with a resulting task.
In our case, our method used ref parameters (this was from the MSDN sample), so there is a little bit of extra boilerplate involved, but the code is at least easy to understand.

That being said, this isn’t dramatically shorter when compared with our C# 3 port of the MSDN code above. However, if we were to extend our requirements a bit, we can start to see more advantages to the Task-based approach. For example, suppose we need to report the results in a user interface control instead of reporting them to the Console. This would be a common operation, but now we have to think about marshaling our calls back to the user interface. This would probably require calling Control.Invoke or Dispatcher.Invoke within our callback, forcing us to specify a delegate within the delegate. The maintainability and ease of understanding drops. However, just as a standard Task can be created with a TaskScheduler that uses the UI synchronization context, so too can we continue a task with a specific context. There are Task.ContinueWith method overloads which allow you to provide a TaskScheduler. This means you can schedule the continuation to run on the UI thread, by simply doing:

    Task.Factory.StartNew(() =>
    {
        int primeFactor1 = 0;
        int primeFactor2 = 0;
        bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
        return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
    }).ContinueWith(task =>
        textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]",
            task.Result.Factor1, task.Result.Factor2, task.Result.Result),
        TaskScheduler.FromCurrentSynchronizationContext());

This is far more understandable than the alternative. By using Task.ContinueWith in conjunction with TaskScheduler.FromCurrentSynchronizationContext(), we get a simple way to push any work onto a background thread and update the user interface on the proper UI thread. This technique works with Windows Presentation Foundation as well as Windows Forms, with no change in methodology.
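As a hedged footnote to the examples above (not in the original post): continuations can also branch on how the antecedent task finished, via TaskContinuationOptions. The DoBackgroundWork method below is a hypothetical stand-in for any computation:

    using System;
    using System.Threading.Tasks;

    class Demo
    {
        // Hypothetical work item; replace with any background computation.
        static int DoBackgroundWork()
        {
            return 42;
        }

        static void Main()
        {
            var work = Task.Factory.StartNew<int>(DoBackgroundWork);

            // Runs only if DoBackgroundWork threw an exception:
            work.ContinueWith(
                t => Console.WriteLine("Failed: {0}", t.Exception.InnerException.Message),
                TaskContinuationOptions.OnlyOnFaulted);

            // Runs only if the task completed successfully:
            work.ContinueWith(
                t => Console.WriteLine("Result: {0}", t.Result),
                TaskContinuationOptions.OnlyOnRanToCompletion);

            Console.ReadKey();
        }
    }

This keeps success and failure handling declarative, instead of burying a try/catch inside a callback.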

    Read the article

  • My Red Gate Experience

    - by Colin Rothwell
I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks. When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody’s: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code. The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and it ensures you never attempt to write too much code at a time. One really good experience of this was early on in my internship when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written. I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as for solving particularly difficult problems. Later on in my internship, for my down tools week project, I was working on adding Python support to Glimpse. Another intern and I pair programmed the entire project, using ping-pong pair programming as much as possible. One bonus that this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written. I’ve also been really won over to the value of descriptive variable names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms.
I’ve swiftly realised that this is a bad idea when working with a team: saving a few keystrokes is inevitably not worth it when it comes to reading the code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no-no: it’s so very easy to change the behaviour of the code whilst forgetting about the comments. Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practices and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it! Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table! I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade school child could tell you that a computer “speaks” in ‘1’s and ‘0’s. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can’t explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970’s. Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because it seems knowledge of the low-level concepts is waning in recent times, I thought a review would be in order.

    CHARACTER: See Spot Run
    HEX:       53 65 65 20 53 70 6F 74 20 52 75 6E
    DECIMAL:   83 101 101 32 83 112 111 116 32 82 117 110
    BINARY:    01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110

In this example, I have broken down the words “See Spot Run” to a level computers can understand – machine language.

CHARACTER: The character level is what is rendered by the computer. A “Character Set” or “Code Page” contains 256 characters, both printable and unprintable. Each character represents 1 BYTE of data. For example, the character string “See Spot Run” is 12 bytes long, exclusive of the quotation marks. Remember, a SPACE is an unprintable character, but it still requires a byte. In the example I have used the default Windows character set, ASCII, which you can see here: http://www.asciitable.com/

HEX: Hex is short for hexadecimal, or Base 16. Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it’s not much of a stretch. Computers think in Base 16, with numeric values ranging from zero to fifteen, or 0 – F. Each decimal place has a possible 16 values as opposed to a possible 10 values in base 10. Therefore, the number 10 in Hex is equal to the number 16 in Decimal.

DECIMAL: The Decimal conversion is strictly for us humans to use for calculations and conversions. It is much easier for us humans to calculate that [30 – 10 = 20] in decimal than it is for us to calculate [1E – A = 14] in Hex. In the old days, an error in a program could be found by determining the displacement from the entry point of a module. Since those values were dumped from the computer’s head, they were in hex. A programmer needed to convert them to decimal, do the equation and convert back to hex. This gets into relative and absolute addressing, a topic for another day.

BINARY: Binary, or machine code, is where any value can be expressed in 1s and 0s. It is really Base 2, because each decimal place can have a possibility of only 2 characters, a 1 or a 0. In Binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s? Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON ( 1 ) or OFF ( 0 ).
Each transistor is a bit, and the order in which the transistors fire (or don’t fire) is what distinguishes one value from another in the computer’s head (or CPU). Consider 32-bit vs 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time. A 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

    DECIMAL   HEX   BINARY
    0         0     0000
    1         1     0001
    2         2     0010
    3         3     0011
    4         4     0100
    5         5     0101
    6         6     0110
    7         7     0111
    8         8     1000
    9         9     1001
    10        A     1010
    11        B     1011
    12        C     1100
    13        D     1101
    14        E     1110
    15        F     1111

Remember that each character is a BYTE, there are 2 HEX characters in a byte (called nibbles) and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
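To play with these conversions yourself, here is a small, hedged C# sketch (not from the original post) that dumps each byte of “See Spot Run” in hex, decimal and binary:

    using System;
    using System.Text;

    class Demo
    {
        static void Main()
        {
            string s = "See Spot Run";

            foreach (byte b in Encoding.ASCII.GetBytes(s))
            {
                // {1:X2} prints the byte as two hex nibbles;
                // Convert.ToString(b, 2) renders the same byte in base 2.
                Console.WriteLine("'{0}'  HEX: {1:X2}  DECIMAL: {2,3}  BINARY: {3}",
                    (char)b, b, b, Convert.ToString(b, 2).PadLeft(8, '0'));
            }
        }
    }

Each output line reproduces one column of the CHARACTER/HEX/DECIMAL/BINARY breakdown above.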

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have heard your feedback and suggestions, and based on them we have come up with a whole new set of features. Here are some of the highlights:

A New Programming Model – A clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.

Tight integration with the Reactive Framework (Rx) – You can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.

High Availability – Checkpointing over temporal streams and multiple processes with shared computation.

Here is how simple coding can be with the 2.1 programming model:

    class Program
    {
        static void Main(string[] args)
        {
            using (Server server = Server.Create("Default"))
            {
                // Create an app
                Application app = server.CreateApplication("app");

                // Define a simple observable which generates an integer every second
                var source = app.DefineObservable(() =>
                    Observable.Interval(TimeSpan.FromSeconds(1)));

                // Define a sink.
                var sink = app.DefineObserver(() =>
                    Observer.Create<long>(x => Console.WriteLine(x)));

                // Define a query to filter the events
                var query = from e in source
                            where e % 2 == 0
                            select e;

                // Bind the query to the sink and create a runnable process
                using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                {
                    Console.WriteLine("Press a key to dispose the process...");
                    Console.ReadKey();
                }
            }
        }
    }

That’s how easily you can define a source and a sink, and compose and run a query. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned – you will see a series of articles coming out over the next few weeks about the new features and how to use them.

Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface.

Regards,
The StreamInsight Team

    Read the article

  • Java Champion Stephen Chin on New Features and Functionality in JavaFX

    - by janice.heiss(at)oracle.com
In an Oracle Technology Network interview, Java Champion Stephen Chin, Chief Agile Methodologist for GXS and one of the most prolific and innovative JavaFX developers, provides an update on the rapidly developing changes in JavaFX.

Chin expressed enthusiasm about recent JavaFX developments: "There is a lot to be excited about -- JavaFX has a new API face. All the JavaFX 2.0 APIs will be exposed via Java classes that will make it much easier to integrate Java server and client code. This also opens up some huge possibilities for JVM language integration with JavaFX."

Chin also spoke about developments in Visage, the new language project created to fill the gap left by JavaFX Script: "It's a domain-specific language for writing user interfaces, which addresses the needs of UI developers. Visage takes over where JavaFX Script left off, providing a statically typed, declarative language with lots of features to make UI development a pleasure."

"My favorite language features from Visage are the object literal syntax for quickly building scene graphs and the bind keyword for connecting your UI to the backend model. However, the language is built for UI development from the top down, including subtle details like null-safe dereferencing for exception-less code."

Read the entire article.

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection.

Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information.

This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!
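As a rough, hedged sketch of the kind of API the article goes on to describe — the member names are from the Chart control's built-in ChartSerializer as best I recall, and Chart1 is an assumed <asp:Chart> declared in the page markup:

    using System.IO;
    using System.Web.UI.DataVisualization.Charting;

    public partial class ChartPage : System.Web.UI.Page
    {
        // Chart1 is assumed to come from the page's designer file.

        protected byte[] SaveSnapshot()
        {
            // Choose what to persist: appearance settings, data, or both.
            Chart1.Serializer.Content = SerializationContents.All;

            using (var stream = new MemoryStream())
            {
                Chart1.Serializer.Save(stream);
                return stream.ToArray(); // e.g. store this in a database record
            }
        }

        protected void LoadSnapshot(byte[] snapshot)
        {
            // Reconstitute the chart from the persisted bytes.
            using (var stream = new MemoryStream(snapshot))
            {
                Chart1.Serializer.Load(stream);
            }
        }
    }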

    Read the article

  • Don’t learn SSDT, learn about your databases instead

    - by jamiet
Last Thursday I presented my session “Introduction to SSDT” at the SQL Supper event held at the offices of 7 Digital (loved the samosas, guys). I did my usual spiel: tour of the IDE, connected development, declarative database development, yadda yadda yadda… and at the end asked if there were any questions. One gentleman in attendance (sorry, can’t remember your name) raised his hand and stated that by attempting to evangelise all of the features I’d missed the single biggest benefit of SSDT: that it can tell you things about your database that you didn’t already know.

I realised that he was dead right. SSDT allows you to import your whole database schema into a new project, and it will instantly give you a list of errors and/or warnings pertaining to the objects in your database. Invalid references (e.g. a long-forgotten stored procedure that refers to a non-existent column), unnecessary 3-part naming, incorrect case usage, syntax errors… it’ll tell you about all of ‘em! Turn on static code analysis (this article shows you how) and you’ll learn even more, such as any stored procedures that begin with “sp_”, WHERE clauses that will kill performance, use of @@IDENTITY instead of SCOPE_IDENTITY(), use of deprecated syntax, implicit casts, etc. The list goes on and on.

I urge you to download and install SSDT (it takes a few minutes, it’s free and you don’t need SQL Server or Visual Studio pre-installed), start a new project:

right-click on your new project and import from your database:

and see what happens:

You may be surprised what you discover. Let me know in the comments below what results you get – total number of objects, number of errors/warnings – I’d be interested to know!

@Jamiet

    Read the article

  • New version of SQL Server Data Tools is now available

    - by jamiet
If you don’t follow the SQL Server Data Tools (SSDT) blog then you may not know that two days ago an updated version of SSDT was released (and by SSDT I mean the database projects, not the SSIS/SSRS/SSAS stuff), along with a new version of the SSDT Power Tools. This release incorporates an updated version of the SQL Server Data Tier Application Framework (aka DAC Framework, aka DacFX), which you can read about in Adam Mahood’s blog post SQL Server Data-Tier Application Framework (September 2012) Available. DacFX is essentially all the gubbins that you need to extract and publish .dacpacs, and according to Adam’s post it incorporates a new feature that I think is very interesting indeed:

Extract DACPAC with data – Creates a database snapshot file (.dacpac) from a live SQL Server or Windows Azure SQL Database that contains data from user tables in addition to the database schema. These packages can be published to a new or existing SQL Server or Windows Azure SQL Database using the SqlPackage.exe Publish action. Data contained in package replaces the existing data in the target database.

In short, .dacpacs can now include data as well as schema. I’m very excited about this because one of my long-standing complaints about SSDT (and its many forebears) is that whilst it has great support for declarative development of schema, it does not provide anything similar for data – if you want to deploy data from your SSDT projects then you have to write Post-Deployment MERGE scripts. This new feature for .dacpacs does not change that situation yet; however, it is a very important pre-requisite, so I am hoping that a feature to provide declaration of data (in addition to the declaration of schema we have today) is going to light up in SSDT in the not too distant future.

Read more about the latest SSDT, Power Tools & DacFX releases at:

Now available: SQL Server Data Tools - September 2012 update! by Janet Yeilding
New SSDT Power Tools! Now for both Visual Studio 2010 and Visual Studio 2012 by Sarah McDevitt
SQL Server Data-Tier Application Framework (September 2012) Available by Adam Mahood

@Jamiet

    Read the article

  • Can I delete libc-bin?

    - by Balazs Szikszay
Question is simple, I need to know because I can't upgrade/install anything, because it always says I have to uninstall/delete it to continue. It also says don't do it if I don't know what I am doing. EDIT: szikszay@szikszay-Latitude-E5530-non-vPro:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not installed Depends: libqtgui4:i386 but it is not installed Depends: libqt4-dbus:i386 but it is not installed Depends: libqt4-network:i386 but it is not installed Depends: libqt4-opengl:i386 but it is not installed Depends: libqt4-qt3support:i386 but it is not installed Depends: libqt4-script:i386 but it is not installed Depends: libqt4-scripttools:i386 but it is not installed Depends: libqt4-sql:i386 but it is not installed Depends: libqt4-svg:i386 but it is not installed Depends: libqt4-test:i386 but it is not installed Depends: libqt4-xml:i386 but it is not installed Depends: libqt4-xmlpatterns:i386 but it is not installed Depends: libcups2:i386 but it is not installed Depends: libcupsimage2:i386 but it is not installed Depends: libcurl3:i386 but it is not installed Depends: libnss3:i386 but it is not installed Depends: libnspr4:i386 but it is not installed Depends: libssl1.0.0:i386 but it is not installed Recommends: libgl1-mesa-glx:i386 but it is not installed Recommends: libgl1-mesa-dri:i386 but it is not installed lib32ffi6 : Depends: libc6-i386 (= 2.4) but it is not installed lib32gcc1 : Depends: libc6-i386 (= 2.5) but it is not installed lib32nss-mdns : Depends: libc6-i386 (= 2.4) but it is not installed lib32stdc++6 : Depends: libc6-i386 (= 2.4) but it is not installed lib32z1 : Depends: libc6-i386 (= 2.4) but it is not installed libacl1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libattr1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libaudio2:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libavahi-client3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed Depends: libdbus-1-3:i386 (= 1.1.1) but it is not installed libavahi-common3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libcomerr2:i386 : Depends: libc6:i386 (= 2.12) but it is not installed libdb5.1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libdrm-intel1:i386 : Depends: libc6:i386 (= 2.3.4) but it is not installed libdrm-nouveau1a:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libdrm-radeon1:i386 : Depends: libc6:i386 (= 2.3.4) but it is not installed libdrm2:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libffi6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libfontconfig1:i386 : Depends: libc6:i386 (= 2.7) but it is not installed Depends: libexpat1:i386 (= 1.95.8) but it is not installed Depends: libfreetype6:i386 (= 2.2.1) but it is not installed libgcc1:i386 : Depends: libc6:i386 (= 2.2.4) but it is not installed libgcrypt11:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libgdbm3:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libglib2.0-0:i386 : Depends: libc6:i386 (= 2.9) but it is not installed libgpg-error0:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libice6:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libidn11:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libjpeg62:i386 : Depends: libc6:i386 (= 2.7) but
it is not installed libkeyutils1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed liblcms1:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libllvm2.9:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libmng1:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libpciaccess0:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libpcre3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed librtmp0:i386 : Depends: libc6:i386 (= 2.7) but it is not installed Depends: libgnutls26:i386 (= 2.9.11-0) but it is not installed libsasl2-2:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libsasl2-modules:i386 : Depends: libc6:i386 (= 2.4) but it is not installed Depends: libssl1.0.0:i386 (= 1.0.0) but it is not installed libselinux1:i386 : Depends: libc6:i386 (= 2.8) but it is not installed libsm6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libsqlite3-0:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libstdc++6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libuuid1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libx11-6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxau6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxcb1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxdamage1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxdmcp6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxext6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxfixes3:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxrender1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxss1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxt6:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libxxf86vm1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed zlib1g:i386 : Depends: libc6:i386 (= 2.4) but it is not installed E: Unmet dependencies. Try using -f. szikszay@szikszay-Latitude-E5530-non-vPro:~$ sudo apt-get upgrade -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... 
Done The following packages will be REMOVED libc-bin The following NEW packages will be installed libc-bin:i386 libc6:i386 libc6-i386 libcups2:i386 libcupsimage2:i386 libcurl3:i386 libdbus-1-3:i386 libexpat1:i386 libfreetype6:i386 libgl1-mesa-dri:i386 libgl1-mesa-glx:i386 libglapi-mesa:i386 libgnutls26:i386 libgssapi-krb5-2:i386 libk5crypto3:i386 libkrb5-3:i386 libkrb5support0:i386 libldap-2.4-2:i386 libnspr4:i386 libnss3:i386 libpng12-0:i386 libqt4-dbus:i386 libqt4-declarative:i386 libqt4-designer:i386 libqt4-network:i386 libqt4-opengl:i386 libqt4-qt3support:i386 libqt4-script:i386 libqt4-scripttools:i386 libqt4-sql:i386 libqt4-svg:i386 libqt4-test:i386 libqt4-xml:i386 libqt4-xmlpatterns:i386 libqtcore4:i386 libqtgui4:i386 libssl1.0.0:i386 libtasn1-3:i386 libtiff4:i386 libxi6:i386 The following packages have been kept back: ginn libgrip0 linux-headers-generic linux-image-generic unity unity-common xserver-xorg-input-evdev xserver-xorg-input-synaptics The following packages will be upgraded: accountsservice acpi-support acpid aisleriot alsa-utils app-install-data-partner apparmor appmenu-qt apport apport-gtk apt apt-transport-https apt-utils aptdaemon aptdaemon-data apturl apturl-common at-spi2-core bamfdaemon banshee banshee-extension-soundmenu banshee-extension-ubuntuonemusicstore baobab bind9-host binutils bluez bluez-alsa bluez-cups bluez-gstreamer brasero brasero-cdrkit brasero-common brltty bzip2 ca-certificates-java checkbox checkbox-gtk colord command-not-found command-not-found-data compiz compiz-core compiz-gnome compiz-plugins-default compiz-plugins-main-default cups cups-bsd cups-client cups-common cups-ppdc dbus dbus-x11 deja-dup desktop-file-utils dnsutils dpkg ecryptfs-utils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common file-roller firefox firefox-globalmenu firefox-gnome-support firefox-locale-en firefox-locale-hu gbrainy gcalctool gconf2 gconf2-common gedit gedit-common ghostscript ghostscript-cups ghostscript-x gir1.2-atspi-2.0 gir1.2-gconf-2.0 gir1.2-gnomebluetooth-1.0 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-totem-1.0 gir1.2-unity-4.0 gir1.2-webkit-3.0 gnome-accessibility-themes gnome-bluetooth gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-font-viewer gnome-games-common gnome-icon-theme gnome-keyring gnome-mahjongg gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-search-tool gnome-session gnome-session-bin gnome-session-canberra gnome-session-common gnome-settings-daemon gnome-sudoku gnome-system-log gnome-system-monitor gnome-utils-common gnomine gnupg gpgv grub-common grub-pc grub-pc-bin grub2-common gstreamer0.10-gconf gstreamer0.10-plugins-good gstreamer0.10-pulseaudio gvfs gvfs-backends gvfs-bin gvfs-fuse gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter gzip hpijs hplip hplip-cups hplip-data icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx ifupdown im-switch indicator-datetime indicator-session indicator-sound initramfs-tools initramfs-tools-bin initscripts insserv isc-dhcp-client isc-dhcp-common iso-codes jockey-common jockey-gtk language-pack-en language-pack-en-base language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-hu language-pack-gnome-hu-base language-pack-hu language-pack-hu-base language-selector-common language-selector-gnome libaccountsservice0 libapt-inst1.3 libapt-pkg4.11 libarchive1 libasound2-plugins libatk-adaptor libatspi2.0-0 libbamf0 libbamf3-0 libbind9-60 
libbluetooth3 libbrasero-media3-1 libbrlapi0.5 libbz2-1.0 libc-dev-bin libc6 libc6-dev libcamel-1.2-29 libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-pulse libcanberra0 libcolord1 libcups2 libcupscgi1 libcupsdriver1 libcupsimage2 libcupsmime1 libcupsppdc1 libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdecoration0 libdns69 libebackend-1.2-1 libebook1.2-12 libecal1.2-10 libecryptfs0 libedata-book-1.2-11 libedata-cal-1.2-13 libedataserver1.2-15 libedataserverui-3.0-1 libevince3-3 libexif12 libexpat1 libfreetype6 libgail-3-0 libgail-3-common libgck-1-0 libgconf2-4 libgcr-3-1 libgdata-common libgdata13 libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libglu1-mesa libgnome-bluetooth8 libgnome-control-center1 libgnome-desktop-3-2 libgnutls26 libgoa-1.0-0 libgs9 libgs9-common libgssapi-krb5-2 libgtk-3-0 libgtk-3-bin libgtk-3-common libgtksourceview-3.0-0 libgtksourceview-3.0-common libgudev-1.0-0 libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libhpmud0 libicu44 libimobiledevice2 libisc62 libisccc60 libisccfg62 libjasper1 libjs-jquery libk5crypto3 libkrb5-3 libkrb5support0 libldap-2.4-2 liblightdm-gobject-1-0 liblwres60 libmetacity-private0 libmission-control-plugins0 libmono-cairo4.0-cil libmono-corlib4.0-cil libmono-csharp4.0-cil libmono-i18n-west4.0-cil libmono-i18n4.0-cil libmono-posix4.0-cil libmono-security4.0-cil libmono-sharpzip4.84-cil libmono-system-configuration4.0-cil libmono-system-core4.0-cil libmono-system-drawing4.0-cil libmono-system-security4.0-cil libmono-system-xml4.0-cil libmono-system4.0-cil libmono-zeroconf1.0-cil libmysqlclient16 libnautilus-extension1 libncurses5 libncursesw5 libnm-glib-vpn1 libnm-glib4 libnm-gtk-common libnm-gtk0 libnm-util2 libnotify0.4-cil libnspr4 libnss3 libnss3-1d libnux-1.0-0 libnux-1.0-common libpam-gnome-keyring libpam-modules libpam-modules-bin libpam-runtime libpam0g libperl5.12 libpng12-0 libpoppler-glib6 libpoppler13 libproxy0 libpulse-mainloop-glib0 libpulse0 libpurple-bin libpurple0 libpython2.7 libqt4-dbus libqt4-declarative libqt4-network libqt4-opengl libqt4-script libqt4-sql libqt4-sql-mysql libqt4-svg libqt4-xml libqt4-xmlpatterns libqtcore4 libqtgui4 libreoffice-base-core libreoffice-calc libreoffice-common libreoffice-core libreoffice-draw libreoffice-emailmerge libreoffice-gnome libreoffice-gtk libreoffice-help-en-gb libreoffice-help-en-us libreoffice-help-hu libreoffice-impress libreoffice-l10n-common libreoffice-l10n-en-gb libreoffice-l10n-en-za libreoffice-l10n-hu libreoffice-math libreoffice-style-human libreoffice-writer libsane-hpaio libsmbclient libsnmp-base libsnmp15 libssl1.0.0 libsyncdaemon-1.0-1 libt1-5 libtasn1-3 libtiff4 libtinfo5 libtotem0 libubuntuone-1.0-1 libubuntuone1.0-cil libudev0 libunity-core-4.0-4 libunity6 libusbmuxd1 libv4l-0 libvorbis0a libvorbisenc2 libvorbisfile3 libwbclient0 libwebkitgtk-1.0-0 libwebkitgtk-1.0-common libwebkitgtk-3.0-0 libwebkitgtk-3.0-common libxi6 libxml2 libxslt1.1 lightdm linux-firmware linux-libc-dev mawk metacity metacity-common mobile-broadband-provider-info modemmanager mono-4.0-gac mono-gac mono-runtime mousetweaks multiarch-support mysql-common nautilus nautilus-data nautilus-sendto-empathy ncurses-base ncurses-bin network-manager network-manager-gnome nux-tools onboard openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl perl perl-base perl-modules poppler-utils pulseaudio pulseaudio-esound-compat pulseaudio-module-bluetooth pulseaudio-module-gconf pulseaudio-module-x11 pulseaudio-utils python-apport 
python-aptdaemon python-aptdaemon-gtk python-aptdaemon.gtk3widgets python-aptdaemon.gtkwidgets python-brlapi python-crypto python-cups python-cupshelpers python-egenix-mxdatetime python-egenix-mxtools python-gobject python-gobject-cairo python-httplib2 python-keyring python-launchpadlib python-libproxy python-libxml2 python-pam python-papyon python-pkg-resources python-problem-report python-pyatspi2 python-software-properties python-ubuntuone-client python-ubuntuone-storageprotocol python-uno python2.7 python2.7-minimal qdbus samba-common samba-common-bin seahorse shotwell simple-scan smbclient sni-qt software-center software-properties-common software-properties-gtk sudo system-config-printer-common system-config-printer-gnome system-config-printer-udev sysv-rc sysvinit-utils telepathy-indicator telepathy-mission-control-5 thunderbird thunderbird-globalmenu thunderbird-gnome-support thunderbird-locale-en thunderbird-locale-en-gb thunderbird-locale-en-us thunderbird-locale-hu tomboy totem totem-common totem-mozilla totem-plugins transmission-common transmission-gtk ttf-opensymbol tzdata tzdata-java ubuntu-desktop ubuntu-docs ubuntu-minimal ubuntu-sso-client ubuntu-standard ubuntuone-client ubuntuone-client-gnome ubuntuone-couch udev unity-lens-applications unity-services uno-libs3 update-manager update-manager-core update-notifier update-notifier-common upstart ure usbmuxd vim-common vim-tiny vinagre vino whois x11-common xdiagnose xorg xserver-common xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-video-all xserver-xorg-video-intel xserver-xorg-video-openchrome xserver-xorg-video-qxl xul-ext-ubufox
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
  libc-bin
498 upgraded, 40 newly installed, 1 to remove and 8 not upgraded.
69 not fully installed or removed.
Need to get 439 MB of archives.
After this operation, 135 MB of additional disk space will be used.
You are about to do something potentially harmful.
To continue type in the phrase ‘Yes, do as I say!’
 ?]

I tried to upgrade, but it gives me the error above; when I try to upgrade with -f, it says I should delete libc-bin. Thanks for the answers, by the way. EDIT2: It also says this: "The package system is broken. If you are using third party repositories then disable them, since they are a common source of problems. Now run the following command in a terminal: apt-get install -f"

    Read the article

  • Residual packages Ubuntu 12.04

    - by hydroxide
    I have an Asus Q500A with Windows 8 and 64-bit Ubuntu 12.04; Linux kernel 3.8.0-32-generic. I have been having residual-package issues that have been giving me trouble while trying to reconfigure xserver-xorg-lts-raring. I tried removing all residual packages from Synaptic, but the following were not removed. Output of sudo dpkg -l | grep "^rc":
    rc gstreamer0.10-plugins-good:i386 0.10.31-1ubuntu1.2 GStreamer plugins from the "good" set
    rc libaa1:i386 1.4p5-39ubuntu1 ASCII art library
    rc libaio1:i386 0.3.109-2ubuntu1 Linux kernel AIO access library - shared library
    rc libao4:i386 1.1.0-1ubuntu2 Cross Platform Audio Output Library
    rc libasn1-8-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - ASN.1 library
    rc libasound2:i386 1.0.25-1ubuntu10.2 shared library for ALSA applications
    rc libasyncns0:i386 0.8-4 Asynchronous name service query library
    rc libatk1.0-0:i386 2.4.0-0ubuntu1 ATK accessibility toolkit
    rc libavahi-client3:i386 0.6.30-5ubuntu2 Avahi client library
    rc libavahi-common3:i386 0.6.30-5ubuntu2 Avahi common library
    rc libavc1394-0:i386 0.5.3-1ubuntu2 control IEEE 1394 audio/video devices
    rc libcaca0:i386 0.99.beta17-2.1ubuntu2 colour ASCII art library
    rc libcairo-gobject2:i386 1.10.2-6.1ubuntu3 The Cairo 2D vector graphics library (GObject library)
    rc libcairo2:i386 1.10.2-6.1ubuntu3 The Cairo 2D vector graphics library
    rc libcanberra-gtk0:i386 0.28-3ubuntu3 GTK+ helper for playing widget event sounds with libcanberra
    rc libcanberra0:i386 0.28-3ubuntu3 simple abstract interface for playing event sounds
    rc libcap2:i386 1:2.22-1ubuntu3 support for getting/setting POSIX.1e capabilities
    rc libcdparanoia0:i386 3.10.2+debian-10ubuntu1 audio extraction tool for sampling CDs (library)
    rc libcroco3:i386 0.6.5-1ubuntu0.1 Cascading Style Sheet (CSS) parsing and manipulation toolkit
    rc libcups2:i386 1.5.3-0ubuntu8 Common UNIX Printing System(tm) - Core library
    rc libcupsimage2:i386 1.5.3-0ubuntu8 Common UNIX Printing System(tm) - Raster image library
    rc libcurl3:i386 7.22.0-3ubuntu4.3 Multi-protocol file transfer library (OpenSSL)
    rc libdatrie1:i386 0.2.5-3 Double-array trie library
    rc libdbus-glib-1-2:i386 0.98-1ubuntu1.1 simple interprocess messaging system (GLib-based shared library)
    rc libdbusmenu-qt2:i386 0.9.2-0ubuntu1 Qt implementation of the DBusMenu protocol
    rc libdrm-nouveau2:i386 2.4.43-0ubuntu0.0.3 Userspace interface to nouveau-specific kernel DRM services -- runtime
    rc libdv4:i386 1.0.0-3ubuntu1 software library for DV format digital video (runtime lib)
    rc libesd0:i386 0.2.41-10build3 Enlightened Sound Daemon - Shared libraries
    rc libexif12:i386 0.6.20-2ubuntu0.1 library to parse EXIF files
    rc libexpat1:i386 2.0.1-7.2ubuntu1.1 XML parsing C library - runtime library
    rc libflac8:i386 1.2.1-6 Free Lossless Audio Codec - runtime C library
    rc libfontconfig1:i386 2.8.0-3ubuntu9.1 generic font configuration library - runtime
    rc libfreetype6:i386 2.4.8-1ubuntu2.1 FreeType 2 font engine, shared library files
    rc libgail18:i386 2.24.10-0ubuntu6 GNOME Accessibility Implementation Library -- shared libraries
    rc libgconf-2-4:i386 3.2.5-0ubuntu2 GNOME configuration database system (shared libraries)
    rc libgcrypt11:i386 1.5.0-3ubuntu0.2 LGPL Crypto library - runtime library
    rc libgd2-xpm:i386 2.0.36~rc1~dfsg-6ubuntu2 GD Graphics Library version 2
    rc libgdbm3:i386 1.8.3-10 GNU dbm database routines (runtime version)
    rc libgdk-pixbuf2.0-0:i386 2.26.1-1 GDK Pixbuf library
    rc libgif4:i386 4.1.6-9ubuntu1 library for GIF images (library)
    rc libgl1-mesa-dri-lts-quantal:i386 9.0.3-0ubuntu0.4~precise1 free implementation of the OpenGL API -- DRI modules
    rc libgl1-mesa-dri-lts-raring:i386 9.1.4-0ubuntu0.1~precise2 free implementation of the OpenGL API -- DRI modules
    rc libgl1-mesa-glx:i386 8.0.4-0ubuntu0.6 free implementation of the OpenGL API -- GLX runtime
    rc libgl1-mesa-glx-lts-quantal:i386 9.0.3-0ubuntu0.4~precise1 free implementation of the OpenGL API -- GLX runtime
    rc libgl1-mesa-glx-lts-raring:i386 9.1.4-0ubuntu0.1~precise2 free implementation of the OpenGL API -- GLX runtime
    rc libglapi-mesa:i386 8.0.4-0ubuntu0.6 free implementation of the GL API -- shared library
    rc libglapi-mesa-lts-quantal:i386 9.0.3-0ubuntu0.4~precise1 free implementation of the GL API -- shared library
    rc libglapi-mesa-lts-raring:i386 9.1.4-0ubuntu0.1~precise2 free implementation of the GL API -- shared library
    rc libglu1-mesa:i386 8.0.4-0ubuntu0.6 Mesa OpenGL utility library (GLU)
    rc libgnome-keyring0:i386 3.2.2-2 GNOME keyring services library
    rc libgnutls26:i386 2.12.14-5ubuntu3.5 GNU TLS library - runtime library
    rc libgomp1:i386 4.6.3-1ubuntu5 GCC OpenMP (GOMP) support library
    rc libgpg-error0:i386 1.10-2ubuntu1 library for common error values and messages in GnuPG components
    rc libgphoto2-2:i386 2.4.13-1ubuntu1.2 gphoto2 digital camera library
    rc libgphoto2-port0:i386 2.4.13-1ubuntu1.2 gphoto2 digital camera port library
    rc libgssapi-krb5-2:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries - krb5 GSS-API Mechanism
    rc libgssapi3-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - GSSAPI support library
    rc libgstreamer-plugins-base0.10-0:i386 0.10.36-1ubuntu0.1 GStreamer libraries from the "base" set
    rc libgstreamer0.10-0:i386 0.10.36-1ubuntu1 Core GStreamer libraries and elements
    rc libgtk2.0-0:i386 2.24.10-0ubuntu6 GTK+ graphical user interface library
    rc libgudev-1.0-0:i386 1:175-0ubuntu9.4 GObject-based wrapper library for libudev
    rc libhcrypto4-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - crypto library
    rc libheimbase1-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - Base library
    rc libheimntlm0-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - NTLM support library
    rc libhx509-5-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - X509 support library
    rc libibus-1.0-0:i386 1.4.1-3ubuntu1 Intelligent Input Bus - shared library
    rc libice6:i386 2:1.0.7-2build1 X11 Inter-Client Exchange library
    rc libidn11:i386 1.23-2 GNU Libidn library, implementation of IETF IDN specifications
    rc libiec61883-0:i386 1.2.0-0.1ubuntu1 an partial implementation of IEC 61883
    rc libieee1284-3:i386 0.2.11-10build1 cross-platform library for parallel port access
    rc libjack-jackd2-0:i386 1.9.8~dfsg.1-1ubuntu2 JACK Audio Connection Kit (libraries)
    rc libjasper1:i386 1.900.1-13 JasPer JPEG-2000 runtime library
    rc libjpeg-turbo8:i386 1.1.90+svn733-0ubuntu4.2 IJG JPEG compliant runtime library.
    rc libjson0:i386 0.9-1ubuntu1 JSON manipulation library - shared library
    rc libk5crypto3:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries - Crypto Library
    rc libkeyutils1:i386 1.5.2-2 Linux Key Management Utilities (library)
    rc libkrb5-26-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - libraries
    rc libkrb5-3:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries
    rc libkrb5support0:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries - Support library
    rc liblcms1:i386 1.19.dfsg-1ubuntu3 Little CMS color management library
    rc libldap-2.4-2:i386 2.4.28-1.1ubuntu4.4 OpenLDAP libraries
    rc libllvm3.0:i386 3.0-4ubuntu1 Low-Level Virtual Machine (LLVM), runtime library
    rc libllvm3.1:i386 3.1-2ubuntu1~12.04.1 Low-Level Virtual Machine (LLVM), runtime library
    rc libllvm3.2:i386 3.2-2ubuntu5~precise1 Low-Level Virtual Machine (LLVM), runtime library
    rc libltdl7:i386 2.4.2-1ubuntu1 A system independent dlopen wrapper for GNU libtool
    rc libmad0:i386 0.15.1b-7ubuntu1 MPEG audio decoder library
    rc libmikmod2:i386 3.1.12-2 Portable sound library
    rc libmng1:i386 1.0.10-3 Multiple-image Network Graphics library
    rc libmpg123-0:i386 1.12.1-3.2ubuntu1 MPEG layer 1/2/3 audio decoder -- runtime library
    rc libmysqlclient18:i386 5.5.32-0ubuntu0.12.04.1 MySQL database client library
    rc libnspr4:i386 4.9.5-0ubuntu0.12.04.1 NetScape Portable Runtime Library
    rc libnss3:i386 3.14.3-0ubuntu0.12.04.1 Network Security Service libraries
    rc libodbc1:i386 2.2.14p2-5ubuntu3 ODBC library for Unix
    rc libogg0:i386 1.2.2~dfsg-1ubuntu1 Ogg bitstream library
    rc libopenal1:i386 1:1.13-4ubuntu3 Software implementation of the OpenAL API (shared library)
    rc liborc-0.4-0:i386 1:0.4.16-1ubuntu2 Library of Optimized Inner Loops Runtime Compiler
    rc libosmesa6:i386 8.0.4-0ubuntu0.6 Mesa Off-screen rendering extension
    rc libp11-kit0:i386 0.12-2ubuntu1 Library for loading and coordinating access to PKCS#11 modules - runtime
    rc libpango1.0-0:i386 1.30.0-0ubuntu3.1 Layout and rendering of internationalized text
    rc libpixman-1-0:i386 0.24.4-1 pixel-manipulation library for X and cairo
    rc libproxy1:i386 0.4.7-0ubuntu4.1 automatic proxy configuration management library (shared)
    rc libpulse-mainloop-glib0:i386 1:1.1-0ubuntu15.4 PulseAudio client libraries (glib support)
    rc libpulse0:i386 1:1.1-0ubuntu15.4 PulseAudio client libraries
    rc libqt4-dbus:i386 4:4.8.1-0ubuntu4.4 Qt 4 D-Bus module
    rc libqt4-declarative:i386 4:4.8.1-0ubuntu4.4 Qt 4 Declarative module
    rc libqt4-designer:i386 4:4.8.1-0ubuntu4.4 Qt 4 designer module
    rc libqt4-network:i386 4:4.8.1-0ubuntu4.4 Qt 4 network module
    rc libqt4-opengl:i386 4:4.8.1-0ubuntu4.4 Qt 4 OpenGL module
    rc libqt4-qt3support:i386 4:4.8.1-0ubuntu4.4 Qt 3 compatibility library for Qt 4
    rc libqt4-script:i386 4:4.8.1-0ubuntu4.4 Qt 4 script module
    rc libqt4-scripttools:i386 4:4.8.1-0ubuntu4.4 Qt 4 script tools module
    rc libqt4-sql:i386 4:4.8.1-0ubuntu4.4 Qt 4 SQL module
    rc libqt4-svg:i386 4:4.8.1-0ubuntu4.4 Qt 4 SVG module
    rc libqt4-test:i386 4:4.8.1-0ubuntu4.4 Qt 4 test module
    rc libqt4-xml:i386 4:4.8.1-0ubuntu4.4 Qt 4 XML module
    rc libqt4-xmlpatterns:i386 4:4.8.1-0ubuntu4.4 Qt 4 XML patterns module
    rc libqtcore4:i386 4:4.8.1-0ubuntu4.4 Qt 4 core module
    rc libqtgui4:i386 4:4.8.1-0ubuntu4.4 Qt 4 GUI module
    rc libqtwebkit4:i386 2.2.1-1ubuntu4 Web content engine library for Qt
    rc libraw1394-11:i386 2.0.7-1ubuntu1 library for direct access to IEEE 1394 bus (aka FireWire)
    rc libroken18-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - roken support library
    rc librsvg2-2:i386 2.36.1-0ubuntu1 SAX-based renderer library for SVG files (runtime)
    rc librtmp0:i386 2.4~20110711.gitc28f1bab-1 toolkit for RTMP streams (shared library)
    rc libsamplerate0:i386 0.1.8-4 Audio sample rate conversion library
    rc libsane:i386 1.0.22-7ubuntu1 API library for scanners
    rc libsasl2-2:i386 2.1.25.dfsg1-3ubuntu0.1 Cyrus SASL - authentication abstraction library
    rc libsdl-image1.2:i386 1.2.10-3 image loading library for Simple DirectMedia Layer 1.2
    rc libsdl-mixer1.2:i386 1.2.11-7 Mixer library for Simple DirectMedia Layer 1.2, libraries
    rc libsdl-net1.2:i386 1.2.7-5 Network library for Simple DirectMedia Layer 1.2, libraries
    rc libsdl-ttf2.0-0:i386 2.0.9-1.1ubuntu1 ttf library for Simple DirectMedia Layer with FreeType 2 support
    rc libsdl1.2debian:i386 1.2.14-6.4ubuntu3 Simple DirectMedia Layer
    rc libshout3:i386 2.2.2-7ubuntu1 MP3/Ogg Vorbis broadcast streaming library
    rc libsm6:i386 2:1.2.0-2build1 X11 Session Management library
    rc libsndfile1:i386 1.0.25-4 Library for reading/writing audio files
    rc libsoup-gnome2.4-1:i386 2.38.1-1 HTTP library implementation in C -- GNOME support library
    rc libsoup2.4-1:i386 2.38.1-1 HTTP library implementation in C -- Shared library
    rc libspeex1:i386 1.2~rc1-3ubuntu2 The Speex codec runtime library
    rc libspeexdsp1:i386 1.2~rc1-3ubuntu2 The Speex extended runtime library
    rc libsqlite3-0:i386 3.7.9-2ubuntu1.1 SQLite 3 shared library
    rc libssl0.9.8:i386 0.9.8o-7ubuntu3.1 SSL shared libraries
    rc libstdc++5:i386 1:3.3.6-25ubuntu1 The GNU Standard C++ Library v3
    rc libstdc++6:i386 4.6.3-1ubuntu5 GNU Standard C++ Library v3
    rc libtag1-vanilla:i386 1.7-1ubuntu5 audio meta-data library - vanilla flavour
    rc libtasn1-3:i386 2.10-1ubuntu1.1 Manage ASN.1 structures (runtime)
    rc libtdb1:i386 1.2.9-4 Trivial Database - shared library
    rc libthai0:i386 0.1.16-3 Thai language support library
    rc libtheora0:i386 1.1.1+dfsg.1-3ubuntu2 The Theora Video Compression Codec
    rc libtiff4:i386 3.9.5-2ubuntu1.5 Tag Image File Format (TIFF) library
    rc libtxc-dxtn-s2tc0:i386 0~git20110809-2.1 Texture compression library for Mesa
    rc libunistring0:i386 0.9.3-5 Unicode string library for C
    rc libusb-0.1-4:i386 2:0.1.12-20 userspace USB programming library
    rc libv4l-0:i386 0.8.6-1ubuntu2 Collection of video4linux support libraries
    rc libv4lconvert0:i386 0.8.6-1ubuntu2 Video4linux frame format conversion library
    rc libvisual-0.4-0:i386 0.4.0-4 Audio visualization framework
    rc libvorbis0a:i386 1.3.2-1ubuntu3 The Vorbis General Audio Compression Codec (Decoder library)
    rc libvorbisenc2:i386 1.3.2-1ubuntu3 The Vorbis General Audio Compression Codec (Encoder library)
    rc libvorbisfile3:i386 1.3.2-1ubuntu3 The Vorbis General Audio Compression Codec (High Level API)
    rc libwavpack1:i386 4.60.1-2 audio codec (lossy and lossless) - library
    rc libwind0-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - stringprep implementation
    rc libwrap0:i386 7.6.q-21 Wietse Venema's TCP wrappers library
    rc libx11-6:i386 2:1.4.99.1-0ubuntu2.2 X11 client-side library
    rc libx11-xcb1:i386 2:1.4.99.1-0ubuntu2.2 Xlib/XCB interface library
    rc libxau6:i386 1:1.0.6-4 X11 authorisation library
    rc libxaw7:i386 2:1.0.9-3ubuntu1 X11 Athena Widget library
    rc libxcb-dri2-0:i386 1.8.1-1ubuntu0.2 X C Binding, dri2 extension
    rc libxcb-glx0:i386 1.8.1-1ubuntu0.2 X C Binding, glx extension
    rc libxcb-render0:i386 1.8.1-1ubuntu0.2 X C Binding, render extension
    rc libxcb-shm0:i386 1.8.1-1ubuntu0.2 X C Binding, shm extension
    rc libxcb1:i386 1.8.1-1ubuntu0.2 X C Binding
    rc libxcomposite1:i386 1:0.4.3-2build1 X11 Composite extension library
    rc libxcursor1:i386 1:1.1.12-1ubuntu0.1 X cursor management library
    rc libxdamage1:i386 1:1.1.3-2build1 X11 damaged region extension library
    rc libxdmcp6:i386 1:1.1.0-4 X11 Display Manager Control Protocol library
    rc libxext6:i386 2:1.3.0-3ubuntu0.1 X11 miscellaneous extension library
    rc libxfixes3:i386 1:5.0-4ubuntu4.1 X11 miscellaneous 'fixes' extension library
    rc libxft2:i386 2.2.0-3ubuntu2 FreeType-based font drawing library for X
    rc libxi6:i386 2:1.6.0-0ubuntu2.1 X11 Input extension library
    rc libxinerama1:i386 2:1.1.1-3ubuntu0.1 X11 Xinerama extension library
    rc libxml2:i386 2.7.8.dfsg-5.1ubuntu4.6 GNOME XML library
    rc libxmu6:i386 2:1.1.0-3 X11 miscellaneous utility library
    rc libxp6:i386 1:1.0.1-2ubuntu0.12.04.1 X Printing Extension (Xprint) client library
    rc libxpm4:i386 1:3.5.9-4 X11 pixmap library
    rc libxrandr2:i386 2:1.3.2-2ubuntu0.2 X11 RandR extension library
    rc libxrender1:i386 1:0.9.6-2ubuntu0.1 X Rendering Extension client library
    rc libxslt1.1:i386 1.1.26-8ubuntu1.3 XSLT 1.0 processing library - runtime library
    rc libxss1:i386 1:1.2.1-2 X11 Screen Saver extension library
    rc libxt6:i386 1:1.1.1-2ubuntu0.1 X11 toolkit intrinsics library
    rc libxtst6:i386 2:1.2.0-4ubuntu0.1 X11 Testing -- Record extension library
    rc libxv1:i386 2:1.0.6-2ubuntu0.1 X11 Video extension library
    rc libxxf86vm1:i386 1:1.1.1-2ubuntu0.1 X11 XFree86 video mode extension library
    rc odbcinst1debian2:i386 2.2.14p2-5ubuntu3 Support library for accessing odbc ini files
    rc skype-bin:i386 4.2.0.11-0ubuntu0.12.04.2 client for Skype VOIP and instant messaging service - binary files
    rc sni-qt:i386 0.2.5-0ubuntu3 indicator support for Qt
    rc wine-compholio:i386 1.7.4~ubuntu12.04.1 The Compholio Edition is a special build of the popular Wine software
    rc xaw3dg:i386 1.5+E-18.1ubuntu1 Xaw3d widget set

    Read the article

  • Getting Started with Puppet on Oracle Solaris 11

    - by Glynn Foster
    One of the exciting enhancements in Oracle Solaris 11.2 has been the introduction of Puppet. While upstream Puppet did have some rudimentary support for Oracle Solaris 11, Drew Fisher and Ginnie Wray worked tirelessly to enhance the Oracle Solaris Puppet offering. We've talked to customers over the past few years and asked them what their problems were and what technologies they were using, particularly for configuration management. Puppet came up time and time again, and it made a huge amount of sense to bring it in as a first-class citizen of the Oracle Solaris platform. So what is Puppet, and why is it useful? To quote from PuppetLabs, the folks responsible for creating Puppet: "Puppet is a declarative, model-based approach to IT automation, helping you manage infrastructure throughout its lifecycle, from provisioning and configuration to orchestration and reporting. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud." What's more, with Puppet support for Oracle Solaris, administrators can now manage a completely heterogeneous data center from a single Puppet master or a series of masters. Better still, it's an excellent tool when combined with our new compliance framework to ensure you're meeting your compliance obligations. We're not stopping there, of course: we'll enhance our offerings over time and work with PuppetLabs to get some of this support upstream (or into the Puppet Forge). So if you've heard some of the buzz around Puppet and never quite got started, and have some Oracle Solaris real estate that you'd love to manage, check out the Getting Started with Puppet on Oracle Solaris 11 guide.

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as a database query. The appearance of the chart can be modified dynamically as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is also possible to define the chart's data and appearance statically, strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!
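
    To make that round trip concrete, here is a minimal C# sketch of the serialization API the excerpt describes, driven through the chart's Serializer property. The ChartSnapshots helper class, the parameter names, and the idea of stashing the bytes in a database column are illustrative assumptions, not the article's own demo code:

        using System.IO;
        using System.Web.UI.DataVisualization.Charting;

        public static class ChartSnapshots
        {
            // Persist only the chart's plotted data points;
            // SerializationContents.All would also capture appearance
            // settings for a full "snapshot" as the article describes.
            public static byte[] Save(Chart chart)
            {
                using (var stream = new MemoryStream())
                {
                    chart.Serializer.Content = SerializationContents.Data;
                    chart.Serializer.Save(stream);
                    return stream.ToArray(); // e.g. store in a database column
                }
            }

            // Deserialization: reconstitute the chart from persisted bytes.
            public static void Load(Chart chart, byte[] snapshot)
            {
                using (var stream = new MemoryStream(snapshot))
                {
                    chart.Serializer.Load(stream);
                }
            }
        }

    By default the serializer emits XML; setting chart.Serializer.Format to SerializationFormat.Binary yields a more compact payload, which may suit database storage better.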

    Read the article
