Search Results


  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of ASP.NET 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and use OData when using the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie. 
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns a 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE – is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request, any API controller method with a name that starts with “Get” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMovie() method successfully returns a movie, it returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method:

Getting a Set of Records with the ASP.NET Web API

Let’s modify our Movie API controller so that it returns a collection of movies.
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8" }).then(function(movies){ callback(movies); }); } </script> </body> </html>

Inserting a Record with the ASP.NET Web API

Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /api/Movie – just as long as the method is invoked within the context of an HTTP POST request. The following HTML page invokes the PostMovie() method.
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "Jackson" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }); function createMovie(movieToCreate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); } </script> </body> </html> This page creates a new movie (the Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie: The HTTP Post operation is performed with the following call to the jQuery $.ajax() method: $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method returns a 201 status code. Updating a Record with the ASP.NET Web API Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request: public void PutMovie(Movie movieToUpdate) { if (movieToUpdate.Id == 1) { // Update the movie in the database return; } // If you can't find the movie to update throw new HttpResponseException(HttpStatusCode.NotFound); } Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP Status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Put Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" }; updateMovie(movieToUpdate, function () { alert("Movie updated!"); }); function updateMovie(movieToUpdate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToUpdate), type: "PUT", contentType: "application/json;charset=utf-8", statusCode: { 200: function () { callback(); }, 404: function () { alert("Movie not found!"); } } }); } </script> </body> </html> Deleting a Record with the ASP.NET Web API Here’s the code for deleting a movie: public HttpResponseMessage DeleteMovie(int id) { // Delete the movie from the database // Return status code return new HttpResponseMessage(HttpStatusCode.NoContent); } This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204). 
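As with GetMovie() and PutMovie(), you may want the delete action to report a missing record rather than always answering with 204. A minimal variant might look like the following sketch – the id check is purely illustrative, just like the hard-coded movie data used elsewhere in this post:

    public HttpResponseMessage DeleteMovie(int id)
    {
        // Illustrative only: pretend the movie with Id 1 is the only one in the database
        if (id != 1)
        {
            // Mirror GetMovie() and PutMovie(): report a missing record as 404 Not Found
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        // Delete the movie from the database here...

        // ...and return a 204 No Content status code on success
        return new HttpResponseMessage(HttpStatusCode.NoContent);
    }

If you add the 404 branch, the client can handle it in its statusCode map the same way the GetMovie() page does.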
The following page illustrates how you can invoke the DeleteMovie() action: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Delete Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> deleteMovie(1, function () { alert("Movie deleted!"); }); function deleteMovie(id, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify({id:id}), type: "DELETE", contentType: "application/json;charset=utf-8", statusCode: { 204: function () { callback(); } } }); } </script> </body> </html> Performing Validation How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes: using System.ComponentModel.DataAnnotations; namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } [Required(ErrorMessage="Title is required!")] [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")] public string Title { get; set; } [Required(ErrorMessage="Director is required!")] public string Director { get; set; } } } In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database: public HttpResponseMessage PostMovie(Movie movieToCreate) { // Validate movie if (!ModelState.IsValid) { var errors = new JsonArray(); foreach (var prop in ModelState.Values) { if (prop.Errors.Any()) { errors.Add(prop.Errors.First().ErrorMessage); } } return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property — can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code). 
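If you prefer to return every validation message, keyed by property name, rather than only the first message per property, one possible variant of the validation branch is this sketch (it assumes using directives for System.Json and System.Linq, both of which already appear elsewhere in this post):

    if (!ModelState.IsValid)
    {
        var errors = new JsonObject();
        foreach (var key in ModelState.Keys)
        {
            var state = ModelState[key];
            if (state.Errors.Any())
            {
                // Collect every message for this property, not just the first one
                errors[key] = new JsonArray(
                    state.Errors.Select(e => (JsonValue)e.ErrorMessage));
            }
        }
        return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
    }

With this shape the client receives an object with one array of messages per invalid property instead of a flat array, so the jQuery error handler has to iterate accordingly.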
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }, function (errors) { var strErrors = ""; $.each(errors, function(index, err) { strErrors += "*" + err + "\n"; }); alert(strErrors); } ); function createMovie(movieToCreate, success, fail) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { success(newMovie); }, 400: function (xhr) { var errors = JSON.parse(xhr.responseText); fail(errors); } } }); } </script> </body> </html> The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property. The error messages are displayed in an alert: (Please don’t use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness) This validation code in our PostMovie() method is pretty generic. There is nothing specific about this code to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this: using System.Json; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http.Controllers; using System.Web.Http.Filters; namespace MyWebAPIApp.Filters { public class ValidationActionFilter:ActionFilterAttribute { public override void OnActionExecuting(HttpActionContext actionContext) { var modelState = actionContext.ModelState; if (!modelState.IsValid) { dynamic errors = new JsonObject(); foreach (var key in modelState.Keys) { var state = modelState[key]; if (state.Errors.Any()) { errors[key] = state.Errors.First().ErrorMessage; } } actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } } } } And you can register the validation filter in the Application_Start() method in the Global.asax file like this: GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter()); After you register the Validation filter, validation error messages are returned from any API controller action method automatically when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter. Querying using OData The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData: · $orderby – Enables you to retrieve results in a certain order. · $top – Enables you to retrieve a certain number of results. 
· $skip – Enables you to skip over a certain number of results (use with $top for paging). · $filter – Enables you to filter the results returned. The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies: public IQueryable<Movie> GetMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Willow", Director="Lucas"}, new Movie {Id=4, Title="Shrek", Director="Smith"}, new Movie {Id=5, Title="Memento", Director="Nolan"} }.AsQueryable(); } If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title Then you will limit the movies returned to the top 2 in order of the movie Title. You will get the following results: By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples: Return every movie directed by Lucas: /api/movie?$filter=Director eq ‘Lucas’ Return every movie which has a title which starts with ‘S’: /api/movie?$filter=startswith(Title,’S') Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2 The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption Summary The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.
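As a closing aside, the OData query options are not tied to jQuery – any HTTP client can issue them. A minimal console sketch using HttpClient (the port number is a placeholder for whatever your project happens to use) might look like this:

    using System;
    using System.Net.Http;

    class MovieQueryDemo
    {
        static void Main()
        {
            // Point this at the host/port your application actually runs on
            var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") };

            // Same query as above: the top 2 movies, ordered by Title, returned as JSON
            string json = client.GetStringAsync("api/movie?$top=2&$orderby=Title").Result;

            Console.WriteLine(json);
        }
    }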


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
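Expressed in code, the distinction for the Calculator example above might look like this (a sketch only, placed inside the CalculatorTest fixture; the comments mark the two phases):

    [Test]
    public void AddReturnsSumOfOperands()
    {
        ICalculator calculator = container.GetService<ICalculator>();

        // During the initial 'Red' this call throws NotImplementedException -
        // good enough for me, because it proves the test reaches the right method...
        double result = calculator.Add(1, 1);

        // ...but once the cycle is finished and the test runs as part of Continuous
        // Integration, 'red' may only ever mean that this assertion fails.
        Assert.AreEqual(2.0, result);
    }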


  • Web Browser Control – Specifying the IE Version

    - by Rick Strahl
    I use the Internet Explorer Web Browser Control in a lot of my applications to display document type layout. HTML happens to be one of the most common document formats and displaying data in this format – even in desktop applications – is often way easier than using normal desktop technologies. One issue the Web Browser Control has is that it’s perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant, by default the Web Browser control will have none of it. IE 9 in particular – with its much improved CSS support and basic HTML 5 support – is a big improvement, and even though the IE control uses some of IE’s internal rendering technology it’s still stuck in the old IE 7 rendering by default. This applies whether you’re using the Web Browser control in a WPF application, a WinForms app, or a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the same COM interfaces and so you’re stuck by those same rules.

Rendering Challenged

To see what I’m talking about, here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality – rounded corners and border shadows – from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control. IE 9 Browser: Web Browser control in a WPF form: The IE 9 page displays this HTML correctly – you see the rounded corners and shadow displayed. Obviously the latter rendering using the Web Browser control in a WPF application is a bit lacking. Not only are the new CSS features missing, but the page also renders in Internet Explorer’s quirks mode, so all the margins, padding etc. behave differently by default, even though there’s a CSS reset applied on this page. If you’re building an application that intends to use the Web Browser control for a live preview of some HTML this is clearly undesirable.

Feature Delegation via Registry Hacks

Fortunately, starting with Internet Explorer 8 there’s a fix for this problem via a registry setting. You can specify a registry key to specify which rendering mode and version of IE should be used by that application. These are not global mind you – they have to be enabled for each application individually. There are two different keys, depending on the bitness of the application:

32 bit Windows, or a 64 bit application on 64 bit Windows: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: yourapplication.exe

32 bit application on 64 bit Windows: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: yourapplication.exe

The value to set this key to (taken from MSDN) is one of the following, as decimal values:

9999 (0x270F) Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive.
9000 (0x2328) Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode.
8888 (0x22B8) Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive.
8000 (0x1F40) Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode.
7000 (0x1B58) Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.
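If you prefer to create the key from code (in an installer, for example) rather than by hand, a minimal sketch looks like this – yourapplication.exe and the value 9000 are simply the sample name and the IE 9 value from the list above, and writing to HKEY_LOCAL_MACHINE requires an elevated process:

    using Microsoft.Win32;

    static class BrowserEmulationSetup
    {
        public static void EnableIE9Rendering()
        {
            // Plain path shown here - use the Wow6432Node variant listed above
            // for a 32 bit application running on 64 bit Windows.
            const string keyPath =
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

            // 9000 = IE9 mode for pages with a standards-based !DOCTYPE (see the value list above)
            Registry.SetValue(keyPath, "yourapplication.exe", 9000, RegistryValueKind.DWord);
        }
    }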
The added key looks something like this in the Registry Editor: With this in place my Html Html Help Builder application which has wwhelp.exe as its main executable now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does. Incidentally I accidentally added an ‘empty’ DWORD value of 0 to my EXE name and that worked as well giving me IE 9 rendering. Although not documented I suspect 0 (or an invalid value) will default to the installed browser. Don’t have a good way to test this but if somebody could try this with IE 8 installed that would be great: What happens when setting 9000 with IE 8 installed? What happens when setting 0 with IE 8 installed? Don’t forget to add Keys for Host Environments If you’re developing your application in Visual Studio and you run the debugger you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That’s because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine. If you’re developing in Visual FoxPro make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment. Cleaner HTML - no more HTML mangling! There are a number of additional benefits to setting up rendering of the Web Browser control to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default format which mangled the original HTML content. If you do the following in the WPF application: private void button2_Click(object sender, RoutedEventArgs e) { dynamic doc = this.webBrowser.Document; MessageBox.Show(doc.body.outerHtml); } you get different output depending on the rendering mode active. With the default IE 7 rendering you get: <BODY><DIV> <H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1> <DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV> <DIV class=containercontent> <FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow --> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Box with Header</LEGEND> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV class="gridheaderleft roundbox-top">Box with a Header</DIV> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box. 
</DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Dialog Style Window</LEGEND> <DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2"> <DIV style="POSITION: relative" class=dialog-header> <DIV class=closebox></DIV>User Sign-in <DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV> <DIV class=descriptionheader>This dialog is draggable and closable</DIV> <DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" "> <LABEL>Password</LABEL> <INPUT name=txtPassword value=" "> <HR> <INPUT id=btnLogin value=Login type=button> </DIV> <DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET> </DIV> <SCRIPT type=text/javascript>     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </SCRIPT> </DIV></BODY> Now lest you think I’m out of my mind and create complete whacky HTML rooted in the last century, here’s the IE 9 rendering mode output which looks a heck of a lot cleaner and a lot closer to my original HTML of the page I’m accessing: <body> <div>         <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>     <div class="toolbarcontainer">         <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>         <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>     </div>         <div class="containercontent">     <fieldset>         <legend>Plain Box</legend>                <!-- Simple Box with rounded corners and shadow -->             <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                              <div style="background: khaki;" class="boxcontenttext roundbox">                     Simple Rounded Corner Box.                 </div>             </div>     </fieldset>     <fieldset>         <legend>Box with Header</legend>         <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                          <div class="gridheaderleft roundbox-top">Box with a Header</div>             <div style="background: khaki;" class="boxcontenttext roundbox-bottom">                 Simple Rounded Corner Box.             
</div>
        </div>
    </fieldset>
    <fieldset>
        <legend>Dialog Style Window</legend>
        <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">
            <div style="position: relative;" class="dialog-header">
                <div class="closebox"></div>
                User Sign-in
                <div class="closebox"></div></div>
            <div class="descriptionheader">This dialog is draggable and closable</div>
            <div class="dialog-content">
                <label>Username:</label>
                <input name="txtUsername" value=" " type="text">
                <label>Password</label>
                <input name="txtPassword" value=" " type="text">
                <hr/>
                <input id="btnLogin" value="Login" type="button">
            </div>
            <div class="dialog-statusbar">Ready</div>
        </div>
    </fieldset>
    </div>
<script type="text/javascript">
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({ handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true;  // true closes - false leaves open
                }
            });
    });
</script>
</div>
</body>
IOW, in IE9 rendering mode the control's output is much closer (but not identical) to the original HTML from the page on the Web that we're reading from. As a side note: unfortunately, the browser feature emulation can't be applied against the Html Help (CHM) engine in Windows, which uses the Web Browser control (or its COM interfaces anyway) to render Html Help content. I tried setting up hh.exe, which is the help viewer, to use IE 9 rendering, but a help file generated with CSS3 features will simply show in IE 7 mode. Bummer – this would have been a nice quick fix to allow help content served from CHM files to look better.
HTML Editing leaves HTML formatting intact
In the same vein, if you do any inline HTML editing in the control by setting content to be editable, IE 9's control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text you are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IE's format. So if I do:
private void button3_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;
    doc.body.contentEditable = true;
}
and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> <hr/>). This is very different from IE 7 mode, which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact. This is huge! In Html Help Builder I have supported HTML editing for a long time, but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper-casing all tags and converting many XHTML-safe tags to its HTML 3 tags.
Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with the exception of self-closing elements like BR/HR). The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited.
Caveats, Caveats, Caveats
It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in mixing these browser versions. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return. I specifically ran into a problem with the document.selection.createRange() function, which in IE 7 compatibility mode returns the expected text range object. When running in IE 8 or IE 9 mode, however, I could not retrieve a valid text range with this code, where loEdit is the WebBrowser control:
loRange = loEdit.document.selection.CreateRange()
The loRange object returned (here in FoxPro) had a length property of 0, but none of the other properties of the TextRange or TextRangeCollection objects were available. I figured this was due to some changed security settings, but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck. In the end I relented and used a JavaScript function in my editor document that returns a selection range object:
function getselectionrange() {
    var range = document.selection.createRange();
    return range;
}
and call that JavaScript function from my host application's code:
*** Use a function in the document to get around HTML Editing issues
loRange = loEdit.document.parentWindow.getselectionrange(.f.)
and that does work correctly. This wasn't a big deal, as I'm already loading a support script file into the editor page, so all I had to do was add the function to this existing script file. You can find out more about how to call script code in the Web Browser control from a host application in a previous post of mine. IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else breaking in my code so far, and that's encouraging, since it does a lot of HTML document manipulation for the custom editor I've created (and would love to replace – any PROFESSIONAL alternatives, anybody?).
Registry Key Installation for your Application
It's important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit applications require separate settings in the registry, so if you're creating your installer you most likely will want to set both keys in the registry preemptively for your application. I use Tarma Installer for all of my application installs, and in Tarma I configure registry keys for both and set a flag to only install the latter key group in the 64 bit version: Because this setting is application specific you unfortunately have to do this for every application you install, but this also means that you can safely configure this setting in the registry because it is after all only applied to your application. Another problem with installer-based setup is version detection. If IE 8 is installed I'd want 8000 for the value; if IE 9 is installed I want 9000. I can do this easily in code, but in the installer this is much more difficult.
I don't have a good solution for this at the moment, but given that the app works with IE 7 mode now, IE 9 mode is just a bonus for the moment. If IE 9 is not installed and 9000 is used, the default rendering will remain in use. It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know what actual version to load before it loads, and once loaded it can only host a single version of IE. That would account for this annoying application-level configuration…
Summary
The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting with it. I'm stoked to see that this is available, as I'd pretty much given up on ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features. Now if we could only get better HTML editing support somehow <snicker>… ah, can't have everything.
© Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET  FoxPro  Windows

    Read the article

  • How to fix “SearchAdministration.aspx webpage cannot be found. 404”

    - by ybbest
Problems: One of my colleagues was having a weird issue today with the Search Service Application in SharePoint 2010. After he created the Search Service Application, he could not browse to the Search Administration page (http://ybbest:5555/searchadministration.aspx?appid=6508b5cc-e19a-4bdc-89b3-05d984999e3c); he got a 404 "page not found" every time he browsed to the page. Analysis: After some basic trouble-shooting, it turned out we could browse to any other page in the search application, e.g. Manage Content Sources (/_admin/search/listcontentsources.aspx) or Manage Crawl Rules (/_admin/search/managecrawlrules.aspx). After some more research, we think some of the web parts on the Search Administration page might be causing the problem. Solution: You need to activate a hidden feature using Enable-SPFeature:
Enable-SPFeature SearchAdminWebParts -url <central admin URL>
Enable-SPFeature SearchAdminWebParts -url http://ybbest:5555
If the feature is already enabled, you need to disable the feature first and then enable it:
Disable-SPFeature SearchAdminWebParts -url http://ybbest:5555
Enable-SPFeature SearchAdminWebParts -url http://ybbest:5555
References: MSDN Forum

    Read the article

  • VS 2010 JavaScript editor – matching braces highlighting – is it so difficult to implement?

    - by AGS777
I do not know. Just curious. But first things first. As a web developer I spend about 80% of my work time editing JavaScript code. And since my server-side platform is .NET, it would be very convenient to have a decent JavaScript text editor within the Visual Studio IDE. So, Visual Studio 2010 is out. Downloaded and installed. What were my expectations regarding the JavaScript editor? Pretty low, actually.  I just wanted to have matching braces highlighted, eventually. That's all. Yes, I know about the Ctrl + ] shortcut, but it is not even remotely close to convenient. And the result? Alas. Without further ado, just look at some real-world fragment of code from the jQuery Templates Proposal experimental plugin as I see it in the Notepad++, Notepad2 and Visual Studio 2010 editors respectively: Notepad++ Notepad2 Visual Studio 2010 Look at the highlighted parentheses, regular expression literals, numbers. Do you have a feeling that the last screenshot is not very informative in comparison with the other ones? If yes, then my question is: why? Instead I was given IntelliSense. Sorry, but I do not need it to rot my mind. Especially one which does not always work properly (try to use it with the base2 library, for example). With all the expressive power of the language I have to know what I am doing. Instead I still have the same plain old Notepad with some of the JavaScript keywords colorized, plus partially functional/useful IntelliSense. What I do need is just a little help to make fewer errors when I type code – some essential text editor facilities. Give me that, and only then feel free to improve on something else. Maybe I am wrong. Then, sorry. I just cannot believe that I have to wait another couple of years to get a very basic code editor feature.

    Read the article

  • "[INS-30131] Initial setup required for the execution of installer validations failed." Encountering this error while installing Oracle database 12c. [on hold]

    - by user132992
I am trying to install the Oracle 12c database on my machine running Fedora 20, and I am encountering this problem: "[INS-30131] Initial setup required for the execution of installer validations failed." When we look at the details, it reads:
Cause - Failed to access the temporary location.
Action - Ensure that the current user has required permissions to access the temporary location.
Additional Information:
- Framework setup check failed on all the nodes
  - Cause: Cause Of Problem Not Available
  - Action: User Action Not Available
Summary of the failed nodes
fedora
- Version of exectask could not be retrieved from node "fedora"
  - Cause: Cause Of Problem Not Available
  - Action: User Action Not Available
To eliminate this error I have tried various measures, including changing the permissions of the tmp folder and restarting the computer, but none of them worked. Please, can someone help me out with this? Any kind of help will be appreciated...

    Read the article

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally it will not during an Automated Build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect wither you are running in IIS or ASP.NET Development Server or None, and create the relevant tags. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will AutoDetect most of the Attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto created code that shows the attributes required to run correctly in IIS or in this case ASP.NET Development Server If you are a purist and don’t like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specified what host to use. Its an extensibility point, so you could write your own. Or you could just use “ASP.NET”. UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one, it is only used if you are using ASP.NET Development Host and is unnecessary if you are using IIS. This sets the host settings and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post  and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use.
[AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}
Figure: This AspNetDevelopmentServer attribute will make sure that the specified web application is launched.
Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that was generated when creating a reference to the web service? Well, it would be wrong and the test would fail. This is where Billy's helper method comes in. Once you have created an instance of your service client, and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the Endpoint address to the same address as your dynamically hosted web application.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Reflection;
using System.ServiceModel.Description;
using System.ServiceModel;

namespace SSW.SQLDeploy.Test
{
    class WcfWebServiceHelper
    {
        public static bool TryUrlRedirection(object client, TestContext context, string identifier)
        {
            bool result = true;
            try
            {
                PropertyInfo property = client.GetType().GetProperty("Endpoint");
                string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString();
                Uri webServerUri = new Uri(webServer);
                ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null);
                EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address);
                builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority));
                endpoint.Address = builder.ToEndpointAddress();
            }
            catch (Exception e)
            {
                context.WriteLine(e.Message);
                result = false;
            }
            return result;
        }
    }
}
Figure: This fixes a problem with the URL in your web.config not being the same as the dynamically hosted ASP.NET Development Server port.
We can now add a call to this method after we create the proxy object and change the Endpoint for the service to the correct one. The call is wrapped in an assert, as if it fails there is no point in continuing.
[AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}
Figure: Editing the Endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy.
As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source?
What about if you are using a build server – how do you tell MSTest where to look for the files? To the rescue comes a property called "%PathToWebRoot%", which is always right on the build server. It will always point to your build drop folder for your solution's web sites, which will be "\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\" or whatever your build drop location is. So let's change the code above to add this.
[AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}
Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere.
Now we have another problem… this will ONLY run on the build server and will fail locally, as %PathToWebRoot%'s default value is "C:\Users\[profile]\Documents\Visual Studio 2010\Projects". Well, this sucks… How do we get the test to run on any build server and any developer laptop? Open "Tools | Options | Test Tools | Test Execution" in Visual Studio and you will see a field called "Web application root directory". This is where you override that default.
Figure: You can override the default website location for tests.
In my case I would put in "D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main", and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a "$/SSW/SqlDeploy/DEV/34567" branch from Main and I want to run tests in there? Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done.
Conclusion
Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test-ANYWHERE solution. Any build server, any developer workstation.
Resources:
http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html
http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html
http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx
http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx
http://www.5z5.com/News/?543f8bc8b36b174f
Technorati Tags: VS2010,MSTest,Team Build 2010,Team Build,Visual Studio,Visual Studio 2010,Visual Studio ALM,Team Test,Team Test 2010

    Read the article

  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft.  If you're not using it already, what are you waiting for? Of course, being a Microsoft product, it's not exactly perfect. ;)  Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful.  This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed. You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error.  Plus you still see annoying red X's in your job history.  One way I've found to avoid this is to use a batch file that runs Robocopy, and I add some commands after it (in red):
robocopy d:\backups \\BackupServer\BackupFolder *.bak
rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
set/A errlev="%ERRORLEVEL% & 24"
rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
exit/B %errlev%
(The REM statements are simply comments and don't need to be included in the batch file.) The SET command lets you use expressions when you use the /A switch.  So I set an environment variable "errlev" to a bitwise AND with the ERRORLEVEL value. Robocopy's exit codes use a bitmap/bitmask to specify its exit status.  The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do.  So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass. The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable.  This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success. This technique should also work for other command-line utilities.  The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to place it in a batch file.  If you also have multiple Robocopy calls, you'll need to place the SET /A command ONLY after the last one.  You'd therefore lose any errors from previous calls, unless you use multiple "errlev" variables and AND them together. (I'll leave this as an exercise for the reader.)
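The fix above is a batch file, which is the right tool for a SQL Server job step. If you happen to launch Robocopy from .NET code instead (say, from a custom job step or a service), the same bitmask idea applies; here is a small sketch of my own – it is not part of the original post, and the paths are placeholders.

using System;
using System.Diagnostics;

class RobocopyRunner
{
    static int Main()
    {
        // Placeholder source/destination - substitute your own.
        var startInfo = new ProcessStartInfo("robocopy.exe",
            @"d:\backups \\BackupServer\BackupFolder *.bak")
        {
            UseShellExecute = false
        };

        using (Process robocopy = Process.Start(startInfo))
        {
            robocopy.WaitForExit();

            // Robocopy's exit code is a bitmask: 1, 2 and 4 are informational,
            // 8 means some copies failed, 16 means a fatal error occurred.
            int genuineErrors = robocopy.ExitCode & 24;
            return genuineErrors; // non-zero only for real failures
        }
    }
}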

    Read the article

  • Looking for Your Next Challenge...Don't Stretch Too Far

    - by david.talamelli
In my role as a Recruiter at Oracle I receive a large number of resumes from people who are interested in working with us. People contact me for a number of reasons: it can be about a specific role that we may be hiring for, or they may send me an email asking if there are any suitable roles for them. Sometimes when I speak to people, the roles we have available are similar to the roles they are actually in now. Sometimes people are interested in making this type of sideways move if their motivation to change jobs is not necessarily increased responsibility or career advancement (for example: money, redundancy, work environment). However, there are times when, after walking through a specific role with a candidate, they say to me: "You know, that is very similar to the role that I am doing now. I would not want to move unless my next role presents me with the next challenge in my career". This is a fair statement – if a person is looking to change jobs for the next step in their career, they should be looking at suitable opportunities that will address that need. In this instance a sideways step will not really present any new challenges or responsibilities. The main change would be the company they are working for. Candidates looking for a new role because they want to move up the ladder should be looking for a role that offers them the next level of responsibility. I think the best job changes for people who are looking for career advancement are the roles that stretch someone outside of their comfort zone, but do not stretch them so much that they can't cope with the added responsibilities and pressure. I often think of this in the context of an elastic band – you can stretch it, but only so much before it snaps. That is what you should be looking for: to be stretched, but not so much that you snap. If you are, for example, in an individual contributor role and would like to move into a management role, you may not be quite ready to take on a role that involves managing a large workforce or requires significant people management experience. While your intentions may be right, your lack of management experience may put you outside the scope of the search for that type of role. In this example you can move from an individual contributor role to a management role, but it may need to be managing a smaller team rather than a larger team. While you are trying to make this transition you can try to pick up some responsibilities in your current role that would give you the skills and experience you need for your next role. Never be afraid to put your hand up to help on a new project or piece of work. You never know when that newly gained experience may come in handy in your career. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read it later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • Eclipse Check for Updates issue

    - by Nicholas Ryan Bowers
I installed Eclipse from the Software Center so it links up and will be updated with the rest of my software. Because I am developing for Android, however, I have to install the ADT Plugin within Eclipse by going to Help > Install New Software (or something to that effect). Now, I do understand that I can update Eclipse itself through the Ubuntu Software Center, but in order to update plugins and extensions within Eclipse, I have to go to Help > Check for Updates (which then scans all plugins for updates). The only issue is that when I installed through the Software Center, the owner became root, and whenever I run Eclipse without root I'm not able to update – I get the error message "Insufficient access privileges to apply this update." When I run it as root, all of my plugins disappear, because I guess I installed them as myself, not as root. I tried to install the plugins as root, but the Install New Software option would not work. Ubuntu 12.04 and Eclipse 3.7.2-1

    Read the article

  • WCF REST on .Net 4.0

    - by AngelEyes
A simple and straightforward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4
Drop the Soap: WCF, REST, and Pretty URIs in .NET 4
Years ago I was working in libraries when the Web 2.0 revolution began.  One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications.  Those were my first impressions of REST; pretty URIs.  Turns out there is a little more to it than that. REST is an architectural style that focuses on resources and structured ways to access those resources via the web.  REST evolved as an "anti-SOAP" movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is a lot when you don't have frameworks hiding it all).  One of the biggest benefits of REST is that browsers can talk to REST services directly because REST works using URIs, QueryStrings, Cookies, SSL, and all those HTTP verbs that we don't have to think about anymore. If you are familiar with ASP.NET MVC then you have been exposed to REST at some level.  MVC relies heavily on routing to generate consistent and clean URIs.  REST for WCF gives you the same type of feel for your services.  Let's dive in.
WCF REST in .NET 3.5 SP1 and .NET 4
This post will cover WCF REST in .NET 4, which drew heavily from the REST Starter Kit and community feedback.  There is basic REST support in .NET 3.5 SP1, and you can also grab the REST Starter Kit to enable some of the features you'll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010.
Getting Started
To get started we'll create a basic WCF Rest Service Application using the new on-line templates option in VS 2010: When you first install a template you are prompted with this dialog:
Dude, Where's my .Svc File?
The WCF REST template shows us the new way we can simply build services.  Before we talk about what's there, let's look at what is not there:
The .Svc File
An Interface Contract
Dozens of lines of configuration that you have to change to make your service work
REST in .NET 4 is greatly simplified and leverages the Web Routing capabilities used in ASP.NET MVC and other parts of the web frameworks.  With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class.  From there, the WCF runtime handles dispatching service calls to the methods based on the Uri Templates.
global.asax
using System;
using System.ServiceModel.Activation;
using System.Web;
using System.Web.Routing;

namespace Blog.WcfRest.TimeService
{
    public class Global : HttpApplication
    {
        void Application_Start(object sender, EventArgs e)
        {
            RegisterRoutes();
        }

        private static void RegisterRoutes()
        {
            RouteTable.Routes.Add(new ServiceRoute("TimeService",
                new WebServiceHostFactory(), typeof(TimeService)));
        }
    }
}
The web.config contains some new structures to support a configuration-free deployment.  Note that this is the default config generated with the template.  I did not make any changes to web.config.
web.config <?xml version="1.0"?> <configuration>   <system.web>     <compilation debug="true" targetFramework="4.0" />   </system.web>   <system.webServer>     <modules runAllManagedModulesForAllRequests="true">       <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,            System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     </modules>   </system.webServer>   <system.serviceModel>     <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>     <standardEndpoints>       <webHttpEndpoint>         <!--             Configure the WCF REST service base address via the global.asax.cs file and the default endpoint             via the attributes on the <standardEndpoint> element below         -->         <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>       </webHttpEndpoint>     </standardEndpoints>   </system.serviceModel> </configuration> Building the Time Service We’ll create a simple “TimeService” that will return the current time.  Let’s start with the following code: using System; using System.ServiceModel; using System.ServiceModel.Activation; using System.ServiceModel.Web; namespace Blog.WcfRest.TimeService {     [ServiceContract]     [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]     public class TimeService     {         [WebGet(UriTemplate = "CurrentTime")]         public string CurrentTime()         {             return DateTime.Now.ToString();         }     } } The endpoint for this service will be http://[machinename]:[port]/TimeService.  To get the current time http://[machinename]:[port]/TimeService/CurrentTime will do the trick. The Results Are In Remember That Route In global.asax? Turns out it is pretty important.  When you set the route name, that defines the resource name starting after the host portion of the Uri. Help Pages in WCF 4 Another feature that came from the starter kit are the help pages.  To access the help pages simply append Help to the end of the service’s base Uri. Dropping the Soap Having dabbled with REST in the past and after using Soap for the last few years, the WCF 4 REST support is certainly refreshing.  I’m currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
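As a footnote to the walkthrough above: the article only shows the service side. A quick way to exercise the endpoint from code is a plain HTTP GET; the following is a minimal sketch of my own (not from the article), and the host name and port are placeholders you'll need to match to wherever the service is actually running.

using System;
using System.Net;

class TimeServiceClient
{
    static void Main()
    {
        // Placeholder base address - match the host/port of your service.
        string baseAddress = "http://localhost:1234/TimeService";

        using (var client = new WebClient())
        {
            // Plain GET against the "CurrentTime" UriTemplate.
            string currentTime = client.DownloadString(baseAddress + "/CurrentTime");
            Console.WriteLine(currentTime);

            // The generated help page mentioned above lives under /help.
            Console.WriteLine(client.DownloadString(baseAddress + "/help"));
        }
    }
}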

    Read the article

  • Wipe 12.04/Windows 7 dual boot setup and start fresh new 12.04

    - by dswhite85
    I have an Asus u56e laptop running a dual-boot setup with Windows 7 and 12.04; Windows 7 was installed first. What's the easiest way to erase/format my 500GB drive so that both Ubuntu and Windows 7 are deleted and I can reinstall 12.04 onto the whole drive? Does the Ubuntu Live CD make this possible, or is there something in gparted I have to tinker with? I've already got everything I need backed up. Any help would be much appreciated.

    Read the article

  • LEGO Lord of the Rings Cut Scenes Spliced into a Full Length Movie [Video]

    - by Jason Fitzpatrick
    If you take all the cut scenes from the LEGO Lord of the Rings video game and splice them end-to-end, the result is an hour and a half LEGO Lord of the Rings movie. Check out the full video here. Courtesy of SpaceTopGames, this mega splice includes every cut scene from the video game, weighs in at one hour and thirty one minutes, and actually works really well as a movie when strung all together. LEGO Lord of the Rings – All Cutscenes [via Freeware Genius]

    Read the article

  • TFS 2008 Web Access Report 100 record limitation

    - by HosamKamel
    By default, TFS 2008 Web Access has a limit of 100 records when you open any query in report mode. Even if you try to export the query to Excel or PDF, you will only get the first 100 records exported. To overcome this, you have to reconfigure this count in the web.config file. Navigate to the web access files at C:\Program Files\Microsoft Visual Studio 2008 Team System Web Access\Wiwa, open web.config, and modify the maxWorkitemsInReportList count to whatever count you need. You need to make the same modification in the web.config located at C:\Program Files\Microsoft Visual Studio 2008 Team System Web Access\Web. A full discussion thread exists here: Team Foundation Server - Team System Web Access

    Read the article

  • Eclipse Juno cannot open, with error "An error has occurred. See the log file", Ubuntu 12.04

    - by ana
    I'm trying to launch Eclipse for the first time. I've downloaded the package and installed it manually. Here is the log file: !SESSION 2012-10-10 16:06:11.460 ----------------------------------------------- eclipse.buildId=M20120914-1800 java.fullversion=GNU libgcj 4.6.3 BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.osgi 4 0 2012-10-10 16:06:19.756 !MESSAGE Could not start bundle: org.eclipse.equinox.console !STACK 0 org.osgi.framework.BundleException: Could not start bundle: org.eclipse.equinox.console at org.eclipse.osgi.framework.internal.core.ConsoleManager.checkForConsoleBundle(ConsoleManager.java:217) at org.eclipse.core.runtime.adaptor.EclipseStarter.startup(EclipseStarter.java:297) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:176) at java.lang.reflect.Method.invoke(libgcj.so.12) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) at org.eclipse.equinox.launcher.Main.main(Main.java:1414) Caused by: org.osgi.framework.BundleException: Exception in org.eclipse.equinox.console.command.adapter.Activator.start() of bundle org.eclipse.equinox.console. at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300) at org.eclipse.osgi.framework.internal.core.ConsoleManager.checkForConsoleBundle(ConsoleManager.java:215) ...7 more Caused by: org.osgi.framework.BundleException: Exception in org.apache.felix.gogo.command.Activator.start() of bundle org.apache.felix.gogo.command. 
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300) at org.eclipse.equinox.console.command.adapter.Activator.startBundle(Activator.java:248) at org.eclipse.equinox.console.command.adapter.Activator.start(Activator.java:239) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(libgcj.so.12) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) ...11 more Caused by: java.lang.NoClassDefFoundError: org.apache.felix.gogo.command.OBR at java.lang.Class.initializeClass(libgcj.so.12) at org.apache.felix.gogo.command.Activator.start(Activator.java:54) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(libgcj.so.12) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) ...19 more Caused by: java.lang.ClassNotFoundException: org.apache.felix.bundlerepository.Repository at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(libgcj.so.12) at java.lang.Class.initializeClass(libgcj.so.12) ...23 more Root exception: java.lang.NoClassDefFoundError: org.apache.felix.gogo.command.OBR at java.lang.Class.initializeClass(libgcj.so.12) at !ENTRY org.eclipse.osgi 2 0 2012-10-10 16:06:30.433 !MESSAGE The following is a complete list of bundles which are not resolved, see the prior log entry for the root cause if it exists: !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.433 !MESSAGE Bundle com.sun.el_2.2.0.v201108011116 [4] was not resolved. !SUBENTRY 2 com.sun.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 com.sun.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.5.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.el_2.2.0.v201108011116 [6] was not resolved. !SUBENTRY 2 javax.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.5.0. !SUBENTRY 2 javax.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.5.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.servlet_3.0.0.v201112011016 [8] was not resolved. !SUBENTRY 2 javax.servlet 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.servlet.jsp_2.2.0.v201112011158 [9] was not resolved. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.6.0. 
!SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle org.apache.jasper.glassfish_2.2.2.v201205150955 [21] was not resolved. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp.el_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp.tagext_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing optionally imported package javax.tools_0.0.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle org.eclipse.equinox.http.jetty_3.0.0.v20120522-1841 [91] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_[2.6.0,4.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_[2.6.0,4.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.equinox.http.servlet_1.0.0. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.http_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.io.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.io.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.0.0,9.0.0). 
!SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.servlet_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util.component_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util.log_[8.0.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.435 !MESSAGE Bundle org.eclipse.equinox.http.registry_1.1.200.v20120522-2049 [92] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet_2.3.0. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_2.3.0. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.435 !MESSAGE Bundle org.eclipse.equinox.http.servlet_1.1.300.v20120522-1841 [93] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet_[2.3.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing optionally imported package javax.servlet.annotation_2.6.0. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing optionally imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_[2.3.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper_1.0.400.v20120522-2049 [94] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_[2.4.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing optionally imported package javax.servlet.annotation_2.6.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing optionally imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_[2.4.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.jsp_[2.0.0,2.3.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package org.apache.jasper.servlet_[0.0.0,6.0.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". 
!SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper.registry_1.0.300.v20120522-2049 [95] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package org.eclipse.equinox.jsp.jasper_0.0.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_2.4.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_2.4.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.help.webapp_3.6.101.v20120717-130216 [135] was not resolved. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required bundle org.eclipse.equinox.jsp.jasper.registry_1.0.100. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required bundle org.eclipse.equinox.http.registry_1.0.200. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_2.4.0. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_2.4.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.apt.pluggable.core_1.0.400.v20120522-1651 [139] was not resolved. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.dispatch_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.model_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.util_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.compiler.apt_1.0.500.v20120522-1651 [141] was not resolved. !SUBENTRY 2 org.eclipse.jdt.compiler.apt 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. !SUBENTRY 2 org.eclipse.jdt.compiler.apt 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.compiler.tool_1.0.101.v20120522-1651 [142] was not resolved. !SUBENTRY 2 org.eclipse.jdt.compiler.tool 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jetty.continuation_8.1.3.v20120522 [155] was not resolved. !SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet_2.6.0. 
!SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.mortbay.log_[6.1.0,7.0.0). !SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.mortbay.util.ajax_[6.1.0,7.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jetty.http_8.1.3.v20120522 [156] was not resolved. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jetty.io_[8.1.0,9.0.0). org.eclipse.jetty.util.resource_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.ssl_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.438 !MESSAGE Bundle org.eclipse.jetty.io_8.1.3.v20120522 [157] was not resolved. !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.component_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.log_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.thread_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.438 !MESSAGE Bundle org.eclipse.jetty.security_8.1.3.v20120522 [158] was not resolved. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.http_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 org.eclipse.jetty.jmx_8.0.0. !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.439 !MESSAGE Missing imported package org.eclipse.jetty.security_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util_[8.1.0,9.0.0). 
!SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.component_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.log_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.resource_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.440 !MESSAGE Bundle org.eclipse.jetty.util_8.1.3.v20120522 [161] was not resolved. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j_[1.5.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.helpers_[1.6.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.impl_[1.5.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.spi_[1.6.0,2.0.0). !ENTRY org.eclipse.osgi 4 0 2012-10-10 16:06:30.441 !MESSAGE Application error !STACK 1 java.lang.ArrayIndexOutOfBoundsException: 0 at org.eclipse.e4.core.internal.di.ConstructorRequestor.calcDependentObjects(ConstructorRequestor.java:79) at org.eclipse.e4.core.internal.di.Requestor.getDependentObjects(Requestor.java:143) at org.eclipse.e4.core.internal.di.InjectorImpl.resolveArgs(InjectorImpl.java:408) at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:312) at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:240) at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:161) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createDefaultHeadlessContext(E4Application.java:420) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createDefaultContext(E4Application.java:434) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createE4Workbench(E4Application.java:182) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:557) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:543) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180) at java.lang.reflect.Method.invoke(libgcj.so.12) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) at org.eclipse.equinox.launcher.Main.main(Main.java:1414) would you please help me with this?

    Read the article

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, and the entry-level Workgroup edition, as well as the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though; this is just a one-shot deal aimed at using the wizard on an ad-hoc basis. To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio for building your own packages, a fully functional IDE integrated into Visual Studio (you get the full VS 2005/2008 IDE with the product). All core functions will be available, but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes Standard edition as having basic transforms, compared to Enterprise, which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but the advanced covers the truly ground-breaking capabilities of data mining, text mining, and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system. The advanced transforms only available with Enterprise edition are: Data Mining Training Destination, Data Mining Query Component, Fuzzy Grouping, Fuzzy Lookup, Term Extraction, Term Lookup, Dimension Processing Destination, and Partition Processing Destination. The advanced task only available with Enterprise edition is the Data Mining Query Task. So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often-asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.

    Read the article

  • Enhance GIMP’s Image Editing Power with Gimp Paint Studio

    - by Asian Angel
    Does your GIMP installation need a little super-charging? Using Gimp Paint Studio you can add a wonderful set of brushes, tools, and more to GIMP and take your work up to the next level. For our example we chose to install the beta version of Gimp Paint Studio on Ubuntu 10.10. Once you download the .zip file and unzip it, all that you need to do is manually transfer the contents shown here to the appropriate GIMP folders on your system. You can see the location of the destination folders here on our system… Note: Make certain to make a back-up copy of the “sessionrc and toolrc files” before you transfer Gimp Paint Studio into your installation (in case you would like to or need to revert back to the originals later). When you finish transferring the files start GIMP up and get ready to have fun. And if your experience is like ours then you should see a noticeable difference in window size and arrangement from the default settings. Here are some samples of the exceptional artwork done by Ramon Miranda and Mozart Couto using Gimp Paint Studio. Really impressive! Artwork by Ramon Miranda & Mozart Couto. Watch the introduction video and see Gimp Paint Studio in action. Download Gimp Paint Studio for Linux, Windows, and Mac [Gimp Paint Studio Homepage] *Keep in mind that there are stable and beta releases available, so choose the version that you are most comfortable with using. View the Installation Guides for Gimp Paint Studio *Page contains wonderful “video and written” versions for adding/installing Gimp Paint Studio to your system. Gimp Paint Studio Video Tutorials Library Visit the Gimp Paint Studio Gallery

    Read the article

  • Why does my laptop resume immediately after suspend?

    - by Igor Zinov'yev
    I seem to be having some problem with suspend mode. Every time I try to suspend my laptop, it just locks the screen. Or maybe it successfully suspends just to resume only an instant after. What could cause such a behaviour? I'm running 32-bit Ubuntu 12.04 with the 3.2.0-25 kernel on a HP dv5-1178er Pavilion laptop (Intel Core 2 Duo). Here are the relevant log sections: kern.log: Jun 1 10:42:21 igor-laptop kernel: [ 2225.131171] PM: Syncing filesystems ... done. Jun 1 10:42:21 igor-laptop kernel: [ 2225.141222] PM: Preparing system for mem sleep Jun 1 10:42:21 igor-laptop kernel: [ 2225.141239] Freezing user space processes ... (elapsed 0.01 seconds) done. Jun 1 10:42:21 igor-laptop kernel: [ 2225.156171] Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done. Jun 1 10:42:21 igor-laptop kernel: [ 2225.172139] PM: Entering mem sleep Jun 1 10:42:21 igor-laptop kernel: [ 2225.172169] Suspending console(s) (use no_console_suspend to debug) Jun 1 10:42:21 igor-laptop kernel: [ 2225.172895] sd 0:0:0:0: [sda] Synchronizing SCSI cache Jun 1 10:42:21 igor-laptop kernel: [ 2225.181767] sd 0:0:0:0: [sda] Stopping disk Jun 1 10:42:21 igor-laptop kernel: [ 2225.251089] ene_ir 00:0a: wake-up capability enabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.251115] i8042 aux 00:09: wake-up capability disabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.251133] i8042 kbd 00:08: wake-up capability enabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.251286] jmb38x_ms 0000:06:00.3: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.252491] sdhci-pci 0000:06:00.1: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.264130] uhci_hcd 0000:00:1d.2: PCI INT D disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.264142] uhci_hcd 0000:00:1d.1: PCI INT B disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.264325] uhci_hcd 0000:00:1a.1: PCI INT B disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.288059] uhci_hcd 0000:00:1a.0: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.288097] uhci_hcd 0000:00:1d.3: PCI INT C disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.288135] uhci_hcd 0000:00:1d.0: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.316051] ehci_hcd 0000:00:1d.7: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.316068] ehci_hcd 0000:00:1a.7: PCI INT D disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.522872] PM: suspend of drv:sd dev:0:0:0:0 complete after 349.979 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522901] PM: suspend of drv:scsi dev:target0:0:0 complete after 349.955 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522927] PM: suspend of drv:scsi dev:host0 complete after 272.260 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522969] ahci 0000:00:1f.2: BIOS update required for suspend/resume Jun 1 10:42:21 igor-laptop kernel: [ 2225.522976] pci_legacy_suspend(): ahci_pci_device_suspend+0x0/0x80 returns -5 Jun 1 10:42:21 igor-laptop kernel: [ 2225.522981] pm_op(): pci_pm_suspend+0x0/0x110 returns -5 Jun 1 10:42:21 igor-laptop kernel: [ 2225.522984] PM: suspend of drv:ahci dev:0000:00:1f.2 complete after 258.932 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522987] PM: Device 0000:00:1f.2 failed to suspend async: error -5 Jun 1 10:42:21 igor-laptop kernel: [ 2225.576228] snd_hda_intel 0000:00:1b.0: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.576270] ACPI handle has no context! 
Jun 1 10:42:21 igor-laptop kernel: [ 2225.592136] PM: suspend of drv:snd_hda_intel dev:0000:00:1b.0 complete after 327.889 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.592206] PM: Some devices failed to suspend Jun 1 10:42:21 igor-laptop kernel: [ 2225.592291] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592298] uhci_hcd 0000:00:1a.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592325] usb usb3: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592339] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592345] uhci_hcd 0000:00:1a.1: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592371] usb usb4: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592387] ehci_hcd 0000:00:1a.7: PCI INT D -> GSI 19 (level, low) -> IRQ 19 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592395] ehci_hcd 0000:00:1a.7: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592843] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592851] uhci_hcd 0000:00:1d.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592854] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592863] uhci_hcd 0000:00:1d.1: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592878] usb usb5: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592892] usb usb6: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592895] uhci_hcd 0000:00:1d.2: PCI INT D -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592903] uhci_hcd 0000:00:1d.2: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592906] uhci_hcd 0000:00:1d.3: PCI INT C -> GSI 18 (level, low) -> IRQ 18 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592915] uhci_hcd 0000:00:1d.3: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592930] usb usb7: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592946] usb usb8: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592949] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 20 (level, low) -> IRQ 20 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592957] ehci_hcd 0000:00:1d.7: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592963] pci 0000:00:1e.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.597106] sd 0:0:0:0: [sda] Starting disk Jun 1 10:42:21 igor-laptop kernel: [ 2225.608138] snd_hda_intel 0000:00:1b.0: BAR 0: set to [mem 0xdf300000-0xdf303fff 64bit] (PCI address [0xdf300000-0xdf303fff]) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608180] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0xf (was 0x100, writing 0x10b) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608233] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x3 (was 0x0, writing 0x10) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608248] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100002) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608299] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 Jun 1 10:42:21 igor-laptop kernel: [ 2225.608313] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 
2225.608420] snd_hda_intel 0000:00:1b.0: irq 50 for MSI/MSI-X Jun 1 10:42:21 igor-laptop kernel: [ 2225.612095] firewire_ohci 0000:06:00.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100006) Jun 1 10:42:21 igor-laptop kernel: [ 2225.612181] sdhci-pci 0000:06:00.1: restoring config space at offset 0x1 (was 0x100003, writing 0x100007) Jun 1 10:42:21 igor-laptop kernel: [ 2225.612211] sdhci-pci 0000:06:00.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.612225] sdhci-pci 0000:06:00.1: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.612296] jmb38x_ms 0000:06:00.3: restoring config space at offset 0x1 (was 0x100003, writing 0x100007) Jun 1 10:42:21 igor-laptop kernel: [ 2225.612326] jmb38x_ms 0000:06:00.3: PCI INT A -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.612332] jmb38x_ms 0000:06:00.3: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.699170] PM: resume of drv:uvcvideo dev:2-4:1.0 complete after 101.965 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.699179] PM: resume of drv:uvcvideo dev:2-4:1.1 complete after 101.932 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.699186] PM: resume of drv: dev:ep_00 complete after 101.917 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.699197] PM: resume of drv: dev:ep_83 complete after 101.972 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716148] PM: resume of drv:hub dev:3-0:1.0 complete after 119.543 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716155] PM: resume of drv: dev:ep_00 complete after 119.544 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716161] PM: resume of drv:hub dev:5-0:1.0 complete after 119.420 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716168] PM: resume of drv: dev:ep_00 complete after 119.381 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716174] PM: resume of drv:hub dev:8-0:1.0 complete after 119.141 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716181] PM: resume of drv: dev:ep_00 complete after 119.104 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716186] PM: resume of drv: dev:ep_81 complete after 119.579 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716191] PM: resume of drv: dev:ep_81 complete after 119.427 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716197] PM: resume of drv: dev:ep_81 complete after 119.143 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.747148] firewire_core: skipped bus generations, destroying all nodes Jun 1 10:42:21 igor-laptop kernel: [ 2225.776093] PM: resume of drv:hp_accel dev:HPQ0004:00 complete after 167.225 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.777243] i8042 kbd 00:08: wake-up capability disabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.777278] ene_ir 00:0a: wake-up capability disabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.820100] PM: resume of drv:hub dev:4-0:1.0 complete after 223.436 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820115] PM: resume of drv: dev:ep_00 complete after 223.444 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820123] PM: resume of drv: dev:ep_81 complete after 223.456 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820206] PM: resume of drv:hub dev:7-0:1.0 complete after 223.266 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820221] PM: resume of drv: dev:ep_81 complete after 223.260 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820238] PM: resume of drv: dev:ep_00 complete after 223.255 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820295] PM: resume of drv:hub dev:6-0:1.0 
complete after 223.453 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820302] PM: resume of drv: dev:ep_00 complete after 223.415 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820321] PM: resume of drv: dev:ep_81 complete after 223.457 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.932108] usb 4-2: reset full-speed USB device number 2 using uhci_hcd Jun 1 10:42:21 igor-laptop kernel: [ 2226.086714] PM: resume of drv:usbhid dev:4-2:1.0 complete after 489.393 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086728] PM: resume of drv: dev:ep_81 complete after 489.384 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086745] PM: resume of drv: dev:ep_00 complete after 489.329 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086753] PM: resume of drv:usbhid dev:4-2:1.1 complete after 489.384 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086764] PM: resume of drv: dev:ep_82 complete after 489.373 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.180555] usb 7-2: reset full-speed USB device number 2 using uhci_hcd Jun 1 10:42:21 igor-laptop kernel: [ 2226.244858] firewire_core: rediscovered device fw0 Jun 1 10:42:21 igor-laptop kernel: [ 2226.335066] btusb 7-2:1.0: no reset_resume for driver btusb? Jun 1 10:42:21 igor-laptop kernel: [ 2226.335068] btusb 7-2:1.1: no reset_resume for driver btusb? Jun 1 10:42:21 igor-laptop kernel: [ 2226.432082] usb 6-1: reset full-speed USB device number 2 using uhci_hcd Jun 1 10:42:21 igor-laptop kernel: [ 2226.578280] PM: resume of drv:nvidia dev:0000:01:00.0 complete after 985.301 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584296] PM: resume of drv:usb dev:7-2:1.0 complete after 986.693 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584308] PM: resume of drv: dev:ep_00 complete after 986.452 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584311] PM: resume of drv:usb dev:7-2:1.1 complete after 986.616 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584315] PM: resume of drv:usb dev:7-2:1.3 complete after 986.483 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584320] PM: resume of drv:usb dev:7-2:1.2 complete after 986.556 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584328] PM: resume of drv: dev:ep_03 complete after 986.588 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584331] PM: resume of drv: dev:ep_81 complete after 986.704 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584334] PM: resume of drv: dev:ep_83 complete after 986.617 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584337] PM: resume of drv: dev:ep_82 complete after 986.688 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584340] PM: resume of drv: dev:ep_02 complete after 986.667 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584344] PM: resume of drv: dev:ep_84 complete after 986.558 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584352] PM: resume of drv: dev:ep_04 complete after 986.542 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590883] PM: resume of drv: dev:ep_00 complete after 993.327 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590887] PM: resume of drv:usb dev:6-1:1.0 complete after 993.424 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590927] PM: resume of drv: dev:ep_82 complete after 993.395 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590934] PM: resume of drv: dev:ep_81 complete after 993.426 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590940] PM: resume of drv: dev:ep_01 complete after 993.456 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.592450] PM: resume of drv:sd dev:0:0:0:0 complete after 995.343 msecs Jun 1 10:42:21 igor-laptop kernel: [ 
2226.592461] PM: resume of drv:scsi_disk dev:0:0:0:0 complete after 802.688 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.592472] PM: resume of drv:scsi_device dev:0:0:0:0 complete after 995.324 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.600339] PM: resume of devices complete after 1008.129 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.601293] PM: resume devices took 1.008 seconds Jun 1 10:42:21 igor-laptop kernel: [ 2226.601330] PM: Finishing wakeup. Jun 1 10:42:21 igor-laptop kernel: [ 2226.601332] Restarting tasks ... done. Jun 1 10:42:21 igor-laptop kernel: [ 2226.625660] video LNXVIDEO:01: Restoring backlight state Jun 1 10:42:22 igor-laptop kernel: [ 2227.478921] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S Jun 1 10:42:22 igor-laptop kernel: [ 2227.481981] iwlwifi 0000:02:00.0: Radio type=0x1-0x2-0x0 Jun 1 10:42:22 igor-laptop kernel: [ 2227.527727] ADDRCONF(NETDEV_UP): wlan0: link is not ready Jun 1 10:42:22 igor-laptop kernel: [ 2227.532468] r8169 0000:03:00.0: eth0: link down Jun 1 10:42:22 igor-laptop kernel: [ 2227.533967] ADDRCONF(NETDEV_UP): eth0: link is not ready pm_suspend.log: Fri Jun 1 10:42:14 MSK 2012: Running hooks for suspend. Running hook /usr/lib/pm-utils/sleep.d/000kernel-change suspend suspend: /usr/lib/pm-utils/sleep.d/000kernel-change suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00logging suspend suspend: Linux igor-laptop 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:33:05 UTC 2012 i686 i686 i386 GNU/Linux Module Size Used by pci_stub 12550 1 vboxpci 22882 0 vboxnetadp 13328 0 vboxnetflt 27211 0 vboxdrv 252189 3 vboxpci,vboxnetadp,vboxnetflt dm_crypt 22528 0 snd_hda_codec_hdmi 31775 1 snd_hda_codec_idt 60251 1 arc4 12473 2 hp_wmi 13652 0 sparse_keymap 13658 1 hp_wmi rfcomm 38139 12 snd_hda_intel 32765 5 snd_hda_codec 109562 3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec bnep 17830 2 btusb 17912 2 bluetooth 158438 23 rfcomm,bnep,btusb joydev 17393 0 parport_pc 32114 0 snd_pcm 80845 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec ppdev 12849 0 uvcvideo 67203 0 binfmt_misc 17292 1 videodev 86588 1 uvcvideo snd_seq_midi 13132 0 snd_rawmidi 25424 1 snd_seq_midi nvidia 10958194 43 snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event ir_lirc_codec 12739 0 lirc_dev 18700 1 ir_lirc_codec snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq ir_mce_kbd_decoder 12681 0 ir_sony_decoder 12462 0 ir_jvc_decoder 12459 0 ir_rc6_decoder 12459 0 psmouse 87213 0 ir_rc5_decoder 12459 0 serio_raw 13027 0 iwlwifi 287934 0 rc_rc6_mce 12454 0 ir_nec_decoder 12459 0 ene_ir 18019 0 rc_core 21263 10 ir_lirc_codec,ir_mce_kbd_decoder,ir_sony_decoder,ir_jvc_decoder,ir_rc6_decoder,ir_rc5_decoder,rc_rc6_mce,ir_nec_decoder,ene_ir mac80211 436455 1 iwlwifi snd 62064 19 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device cfg80211 178679 2 iwlwifi,mac80211 hp_accel 25728 0 lis3lv02d 19268 1 hp_accel input_polldev 13648 1 lis3lv02d mac_hid 13077 0 wmi 18744 1 hp_wmi jmb38x_ms 17406 0 soundcore 14635 1 snd snd_page_alloc 14115 2 snd_hda_intel,snd_pcm memstick 15857 1 jmb38x_ms firewire_sbp2 18346 0 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp vesafb 13516 1 usbhid 41906 0 hid 77367 1 usbhid firewire_ohci 40180 0 firewire_core 56906 2 firewire_sbp2,firewire_ohci crc_itu_t 12627 1 firewire_core sdhci_pci 18324 0 sdhci 28241 1 sdhci_pci r8169 56321 0 video 19068 0 total used free shared 
buffers cached Mem: 3095544 2364260 731284 0 159020 1280240 -/+ buffers/cache: 925000 2170544 Swap: 1718916 0 1718916 /usr/lib/pm-utils/sleep.d/00logging suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00powersave suspend suspend: /usr/lib/pm-utils/sleep.d/00powersave suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/01PulseAudio suspend suspend: Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> /usr/lib/pm-utils/sleep.d/01PulseAudio suspend suspend: success. Running hook /etc/pm/sleep.d/10_grub-common suspend suspend: /etc/pm/sleep.d/10_grub-common suspend suspend: success. Running hook /etc/pm/sleep.d/10_unattended-upgrades-hibernate suspend suspend: /etc/pm/sleep.d/10_unattended-upgrades-hibernate suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/55NetworkManager suspend suspend: Having NetworkManager put all interaces to sleep...Failed. /usr/lib/pm-utils/sleep.d/55NetworkManager suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/60_wpa_supplicant suspend suspend: Failed to connect to wpa_supplicant - wpa_ctrl_open: No such file or directory /usr/lib/pm-utils/sleep.d/60_wpa_supplicant suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/75modules suspend suspend: /usr/lib/pm-utils/sleep.d/75modules suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/90clock suspend suspend: /usr/lib/pm-utils/sleep.d/90clock suspend suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/94cpufreq suspend suspend: /usr/lib/pm-utils/sleep.d/94cpufreq suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95anacron suspend suspend: stop: Unknown instance: /usr/lib/pm-utils/sleep.d/95anacron suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95hdparm-apm suspend suspend: /usr/lib/pm-utils/sleep.d/95hdparm-apm suspend suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/95led suspend suspend: /usr/lib/pm-utils/sleep.d/95led suspend suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler suspend suspend: nVidia binary video drive detected, not using quirks. /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/99video suspend suspend: kernel.acpi_video_flags = 0 /usr/lib/pm-utils/sleep.d/99video suspend suspend: success. Running hook /etc/pm/sleep.d/novatel_3g_suspend suspend suspend: /etc/pm/sleep.d/novatel_3g_suspend suspend suspend: success. Fri Jun 1 10:42:19 MSK 2012: performing suspend Fri Jun 1 10:42:21 MSK 2012: Awake. Fri Jun 1 10:42:21 MSK 2012: Running hooks for resume Running hook /etc/pm/sleep.d/novatel_3g_suspend resume suspend: /etc/pm/sleep.d/novatel_3g_suspend resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/99video resume suspend: /usr/lib/pm-utils/sleep.d/99video resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler resume suspend: /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95led resume suspend: /usr/lib/pm-utils/sleep.d/95led resume suspend: not applicable. 
Running hook /usr/lib/pm-utils/sleep.d/95hdparm-apm resume suspend: /dev/sda: setting Advanced Power Management level to 0xfe (254) APM_level = 254 /dev/sda: setting Advanced Power Management level to 0xfe (254) APM_level = 254 /usr/lib/pm-utils/sleep.d/95hdparm-apm resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95anacron resume suspend: /usr/lib/pm-utils/sleep.d/95anacron resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/94cpufreq resume suspend: /usr/lib/pm-utils/sleep.d/94cpufreq resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/90clock resume suspend: /usr/lib/pm-utils/sleep.d/90clock resume suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/75modules resume suspend: Reloaded unloaded modules. /usr/lib/pm-utils/sleep.d/75modules resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/60_wpa_supplicant resume suspend: Failed to connect to wpa_supplicant - wpa_ctrl_open: No such file or directory /usr/lib/pm-utils/sleep.d/60_wpa_supplicant resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/55NetworkManager resume suspend: Having NetworkManager wake interfaces back up...Failed. /usr/lib/pm-utils/sleep.d/55NetworkManager resume suspend: success. Running hook /etc/pm/sleep.d/10_unattended-upgrades-hibernate resume suspend: /etc/pm/sleep.d/10_unattended-upgrades-hibernate resume suspend: success. Running hook /etc/pm/sleep.d/10_grub-common resume suspend: /etc/pm/sleep.d/10_grub-common resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/01PulseAudio resume suspend: Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> /usr/lib/pm-utils/sleep.d/01PulseAudio resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00powersave resume suspend: /usr/lib/pm-utils/sleep.d/00powersave resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00logging resume suspend: /usr/lib/pm-utils/sleep.d/00logging resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/000kernel-change resume suspend: /usr/lib/pm-utils/sleep.d/000kernel-change resume suspend: success. Fri Jun 1 10:42:22 MSK 2012: Finished.

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times that the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within each of those threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let’s revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we’re looping through an image, and calling a method on each pixel in the image.  If this were being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we’re restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single-core system from supplying zero; without this check, we’d potentially cause an exception.  I also did not hard-code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system. 
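
Before turning to the PLINQ options below, here is a minimal, self-contained sketch of the ParallelOptions usage just described; the data array and the increment inside the loop are stand-ins for the article's pixelData and AdjustContrast call, not code from the original series.

using System;
using System.Threading.Tasks;

class ParallelOptionsDemo
{
    static void Main()
    {
        var data = new int[1000, 1000];

        // Cap the parallel loop at half the available cores, but never below one.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1)
        };

        Parallel.For(0, data.GetUpperBound(0), options, row =>
        {
            for (int col = 0; col < data.GetUpperBound(1); ++col)
            {
                // Stand-in for AdjustContrast(pixelData[row, col], minPixel, maxPixel)
                data[row, col] = data[row, col] + 1;
            }
        });

        Console.WriteLine("Processed {0} rows.", data.GetUpperBound(0));
    }
}
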
    Parallel LINQ also supports configuration, and in fact it has quite a few more options for configuring the system.  The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism.  Let's revisit our declarative data parallelism sample from Part 6:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method:

        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
        double min = collection
                        .AsParallel()
                        .WithDegreeOfParallelism(maxThreads)
                        .Min(item => item.PerformComputation());

    This automatically restricts the PLINQ query to half of the threads on the system.

    PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query sequentially.  This occurs because many queries, if parallelized, actually cause an overall slowdown compared to a serial processing equivalent.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN):

    - Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
    - Queries that contain a Take, TakeWhile, Skip, or SkipWhile operator where indices in the source sequence are not in the original order.
    - Queries that contain Zip or SequenceEqual, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
    - Queries that contain Concat, unless it is applied to indexable data sources.
    - Queries that contain Reverse, unless applied to an indexable data source.

    If a query matches one of these shapes, PLINQ will run it on a single thread by default.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method to the method chain.

    Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread.

    This streaming introduces overhead.  IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel results back into a single stream, which introduces synchronization issues.
    There are two extremes of how this merging could be accomplished, and both have disadvantages.

    The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread has access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item introduces synchronization overhead, since each item needs to be merged individually.

    On the other extreme, the system could wait until all of the results from all of the threads are ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest.  However, the calling thread has to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned.

    The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses what it considers an optimal buffer size.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios.

    However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they become available, with no buffering.  This can be done by changing our above routine to:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    On the other hand, if we are already on a background thread and want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
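    To make the merge options concrete, here is a minimal, self-contained sketch (not from the original article; the PerformComputation stand-in and its sleep-based timings are invented for illustration).  When run with NotBuffered, results should begin printing almost immediately as individual items complete; with FullyBuffered, nothing prints until the entire query has finished.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class MergeOptionsSample
        {
            // Illustrative stand-in for a slow computation with variable runtime.
            static int PerformComputation(int item)
            {
                Thread.Sleep(50 + (item % 7) * 20);
                return item * item;
            }

            static void Run(ParallelMergeOptions mergeOptions)
            {
                var stopwatch = Stopwatch.StartNew();
                var query = Enumerable.Range(0, 20)
                                      .AsParallel()
                                      .WithMergeOptions(mergeOptions)
                                      .Select(i => PerformComputation(i));

                // NotBuffered streams each result to the caller as it is produced;
                // FullyBuffered holds everything until the whole query completes.
                foreach (var result in query)
                {
                    Console.WriteLine("{0}: {1} at {2} ms", mergeOptions, result, stopwatch.ElapsedMilliseconds);
                }
            }

            static void Main()
            {
                Run(ParallelMergeOptions.NotBuffered);
                Run(ParallelMergeOptions.FullyBuffered);
            }
        }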

    Read the article

  • Use a Windows 8-Like Task Manager in Windows 7, Vista, and XP

    - by Lori Kaufman
    One of the new features in Windows 8 is the improved Task Manager, which provides access to more information and settings. If you don’t want to upgrade, there is a way you can use a simple Windows 8-like Task Manager in Windows 7, Vista, or XP. The Windows 8 Metro Task Manager does not need to be installed. Simply download the .zip file (see the download link at the end of this article), extract the files, and double-click the Windows 8 Task Manager.exe file. A window displays a list of tasks currently running with the status of each task listed. To end a task, select the task in the list and click End Task.

    Read the article

  • How to integrate Skype in Messaging Menu with Skype-Wrapper?

    - by Tahir Akram
    I can't see skype-wrapper in the Unity dash (Alt+F2), so I run it from a terminal and attach it to Skype. But Skype only appears in the messaging menu when I run it from the terminal, like this:

        tahir@StoneCode:~$ skype-wrapper
        Starting skype-wrapper
        /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
          import gobject._gobject
        INFO: Initializing Skype API
        INFO: Waiting for Skype Process
        INFO: Attaching skype-wrapper to Skype process
        INFO: Attached complete

    When I quit the terminal, Skype disappears from the messaging menu. So do I need to run skype-wrapper instead of skype and add it to my startup applications? Or is there another workaround? I followed this tutorial. Restarting also does not help. Thanks.

    Read the article

  • Set the Minimum and Maximum Tab Widths in Firefox without an Add-on

    - by Lori Kaufman
    If you tend to have a lot of tabs open in Firefox, there may be times when you can’t see all the tabs you have open, and you need to navigate among your tabs using the tab scrolling arrows. There are add-ons available for Firefox that will make multiple rows of tabs, such as Tab Utilities. However, this still may not be ideal, as it takes a lot of screen real estate when you have a lot of tabs open. There’s an easy way to set the width of the tabs, so they still display text or website icons, and, at the same time, allow more tabs to be visible. To change the width of the tabs, enter “about:config” in the address bar in Firefox and press Enter.

    Read the article

  • mysql not starting

    - by Eiriks
    I have a server running on rackspace.com; it has been running for about a year (collecting data for a project) with no problems. Now it seems MySQL froze: I could not connect through the ssh command line, a remote app (Sequel Pro), or the web (pages using the db just froze). I got a bit eager to fix this quickly and rebooted the virtual server, which runs Ubuntu 10.10. It is a small virtual LAMP server (10 GB storage, of which I'm only using 1, and 256 MB RAM, which has not been a problem). Now, after the reboot, I cannot get MySQL to start again.

        service mysql status
        mysql stop/waiting

    I believe this just means MySQL is not running. How do I get it running again?

        service mysql start
        start: Job failed to start

    No. Just typing 'mysql' gives:

        mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    There is a .sock file in this folder; 'ls -l' gives:

        srwxrwxrwx 1 mysql mysql 0 2012-12-01 17:20 mysqld.sock

    From googling this for a while now, I see that many talk about the log file and my.cnf.

    Logs: Not sure which ones I should look at. This log file is empty: '/var/log/mysql/error.log', and so are '/var/log/mysql.err' and '/var/log/mysql.log'.

    my.cnf is located in '/etc/mysql' and looks like this. I can't see anything clearly wrong with it either.

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1

        log_error = /var/log/mysql/error.log

        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        #       other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    I need the data in the database (so I'd like to avoid reinstalling), and I need it back up and running again. All hints, tips, and solutions are welcomed and appreciated.

    Read the article
