Search Results

Search found 42465 results on 1699 pages for 'xml simple'.

  • Elegant methods for caching search results from a RESTful service?

    - by Paul
    I have a RESTful web service which I access from the browser using JavaScript. As an example, say that this web service returns a list of all the Message resources assigned to me when I send a GET request to /messages/me. For performance reasons, I'd like to cache this response so that I don't have to re-fetch it every time I visit my Manage Messages web page. The cached response would expire after 5 minutes. If a Message resource is created "behind my back", say by the system admin, it's possible that I won't know about it for up to 5 minutes, until the cached search response expires and is re-fetched. This is acceptable, because it creates no confusion for me. However if I create a new Message resource which I know should be part of the search response, it becomes confusing when it doesn't appear on my Manage Messages page immediately. In general, when I knowingly create/delete/update a resource that invalidates a cached search response, I need that cached response to be expired/flushed immediately. The core problem which I can't figure out: I see no simple way of connecting the task of creating/deleting/updating a resource with the task of expiring the appropriate cached responses. In this example it seems simple, I could manually expire the cached search response whenever I create/delete/update a(ny) Message resource. But in a more complex system, keeping track of which search responses to expire under what circumstances will get clumsy quickly. If someone could suggest a simple solution or some clarifying thoughts, I'd appreciate it.
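    One pattern that addresses this is dependency tagging: register every cached search response under the resource types it depends on, and flush by tag whenever such a resource is created, updated or deleted, so the mutation code never needs to know which searches exist. A minimal sketch of the idea - in C# rather than the browser-side JavaScript the question uses, and with illustrative names only, not any particular framework's API:

        using System;
        using System.Collections.Generic;

        // Hypothetical tag-based cache: entries expire after a TTL, and any
        // mutation of a resource type flushes every entry tagged with it.
        public class TaggedCache
        {
            private readonly Dictionary<string, Tuple<string, DateTime>> _entries =
                new Dictionary<string, Tuple<string, DateTime>>();
            private readonly Dictionary<string, HashSet<string>> _keysByTag =
                new Dictionary<string, HashSet<string>>();

            public void Put(string key, string body, TimeSpan ttl, params string[] tags)
            {
                _entries[key] = Tuple.Create(body, DateTime.UtcNow + ttl);
                foreach (var tag in tags)
                {
                    HashSet<string> keys;
                    if (!_keysByTag.TryGetValue(tag, out keys))
                        _keysByTag[tag] = keys = new HashSet<string>();
                    keys.Add(key);
                }
            }

            public string Get(string key)
            {
                Tuple<string, DateTime> entry;
                if (_entries.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                    return entry.Item1;
                return null;   // missing or expired
            }

            // Call this from the one code path that creates/updates/deletes
            // a resource of the given type.
            public void InvalidateTag(string tag)
            {
                HashSet<string> keys;
                if (!_keysByTag.TryGetValue(tag, out keys)) return;
                foreach (var key in keys) _entries.Remove(key);
                keys.Clear();
            }
        }

    With this shape, caching GET /messages/me becomes cache.Put("/messages/me", json, TimeSpan.FromMinutes(5), "Message"), and every handler that writes a Message just calls cache.InvalidateTag("Message") - the clumsy bookkeeping collapses to one tag per resource type.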

    Read the article

  • 2 basic but interesting questions about .NET

    - by b-gen-jack-o-neill
    Hi, when I first saw C#, I thought this must be some joke. I was starting with programming in C. But in C# you could just drag and drop objects, and just write event code for them. It was so simple. Now, I still like C the most, because I am very attracted to the basic low-level operations, and C is just the next level up from assembler, with a few basic routines, so I like it very much. Even more because I write little apps for microcontrollers. But yesterday I wrote a very simple control program for my microcontroller-based LED cube in asm, and I needed some way to simply create animation sequences for the cube. So, I remembered C#. I have practically NO C# skills, but still I created a simple program to make animation sequences, with a GUI, in about an hour, just with the help of Google and the embedded function descriptions in C#. So, to get to the point: is there some reason other than top speed to use any other language than C#? I mean, it is so effective. I know that Java is a bit similar, but I expect C# to be more effective on Windows since it's directly from Microsoft. The second question is: what is the advantage of compiling into CIL and then running on the CLR, rather than compiling directly into machine code? I know that portability is one, but since C# is mainly for Windows, wouldn't it be more powerful to just compile it directly? Thanks.

    Read the article

  • Advanced Query with Join

    - by user1462589
    I'm trying to convert a product table that contains all the details of a product into separate tables in SQL. I've got everything done except for the duplicated descriptor details. The problem I'm having is that all the products have size/color/style/other values that many other products also have. I want to keep only one row per size or color descriptor and reuse its ID for all the products, so the descriptor ID acts as the parent key and each product row references it as a foreign key. The only complication is that every descriptor would have multiple products assigned to it. So I was thinking, on the fly, to skip working out a parent key per descriptor and instead just check whether that descriptor already exists, and if it does, reuse its key.

    Data table:

    PI | Colo | Sz | OTHER
    1  | Blue | 5  | Vintage
    2  | Blue | 6  | Vintage
    3  | Blac | 5  | Simple
    4  | Blac | 6  | Simple

    Its destination table is this:

    DI | Description
    1  | Blue
    2  | Blac
    3  | 5
    4  | 6
    6  | Vintage
    7  | Simple

    (i.e. select the unique values of Data.Table.Colo, Data.Table.Sz and Data.Table.Other into the descriptor table.)

    Then the dual part of the question: after we create all the descriptors, how do I run a new query that assigns the product IDs to the descriptors?

    PI | DI
    1  | 1
    1  | 3
    1  | 4
    2  | 1
    2  | 3
    2  | 4

    By figuring out how to do this I should be able to duplicate the pattern for all 300+ columns in the product table. Some of these fields are 60+ characters large, so it's going to save a ton of space. Do I use an array?
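    The "check whether the descriptor exists and reuse its key" step is a classic lookup-or-create. Here is a sketch of that logic in C# with illustrative names (in SQL you would express the same thing as an insert-if-absent followed by a key lookup):

        using System;
        using System.Collections.Generic;

        class DescriptorIndex
        {
            // One shared ID per distinct descriptor value.
            private readonly Dictionary<string, int> _idByValue = new Dictionary<string, int>();
            private int _nextId = 1;

            public int GetOrCreate(string value)
            {
                int id;
                if (!_idByValue.TryGetValue(value, out id))
                {
                    id = _nextId++;          // new descriptor: assign the next key
                    _idByValue[value] = id;
                }
                return id;                   // existing descriptor: reuse its key
            }
        }

        class Program
        {
            static void Main()
            {
                var index = new DescriptorIndex();
                var links = new List<KeyValuePair<int, int>>();   // (product ID, descriptor ID)

                var products = new Dictionary<int, string[]>
                {
                    { 1, new[] { "Blue", "5", "Vintage" } },
                    { 2, new[] { "Blue", "6", "Vintage" } },
                };

                foreach (var p in products)
                    foreach (var d in p.Value)
                        links.Add(new KeyValuePair<int, int>(p.Key, index.GetOrCreate(d)));

                // Prints the PI | DI pairs shown above; "Blue" and "Vintage"
                // each get a single ID that both products reuse.
                foreach (var link in links)
                    Console.WriteLine("{0} | {1}", link.Key, link.Value);
            }
        }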

    Read the article

  • Elegant ways to print out a bunch of instance attributes in python 2.6?

    - by wds
    First some background. I'm parsing a simple file format, and wish to re-use the results in python code later, so I made a very simple class hierarchy and wrote the parser to construct objects from the original records in the text files I'm working from. At the same time I'd like to load the data into a legacy database, the loader files for which take a simple tab-separated format. The most straightforward way would be to just do something like: print "%s\t%s\t....".format(record.id, record.attr1, len(record.attr1), ...) Because there are so many columns to print out though, I thought I'd use the Template class to make it a bit easier to see what's what, i.e.: templ = Template("$id\t$attr1\t$attr1_len\t...") And I figured I could just use the record in place of the map used by a substitute call, with some additional keywords for derived values: print templ.substitute(record, attr1_len=len(record.attr1), ...) Unfortunately this fails, complaining that the record instance does not have an attribute __getitem__. So my question is twofold: do I need to implement __getitem__ and if so how? is there a more elegant way for something like this where you just need to output a bunch of attributes you already know the name for?

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes. Summary for lazy readers: Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads. Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch. Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0. The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application). New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system. Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc. Top links: Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext ASP.NET and Web Tools for Visual Studio 2013 Release Notes Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework Okay, for those of you who are still with me, let's dig in a bit. Quick web dev notes on downloading and installing Visual Studio 2013 I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select the Web Developer Tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing. Visual Studio 2013 development settings and color theme When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later. Color theme Sigh. 
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter. Signing in During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do. Overview of shiny new things in ASP.NET land There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site. One ASP.NET You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family. Authentication In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365. Identity There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Membership systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features: There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted. Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon). New Bootstrap based project templates The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables, etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not user-agent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme: Scaffolding The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but it wasn't fully baked for RTM. Look for it in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon. Project Readme page This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser. ASP.NET MVC 5 The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC. Attribute routing ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like. 
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo public class SampleController : Controller { [Route("spaghetti/with-nesting/where-is-waldo")] public string Hiding() { return "You found me!"; } } I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this: public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } } You can read more about Attribute Routing in ASP.NET MVC 5 here. Filter enhancements There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers. ASP.NET Web API 2 ASP.NET Web API 2 includes a lot of new features. Attribute Routing ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article. OAuth 2.0 ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications. OData Improvements ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson. Lots more There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes. OWIN and Katana I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. 
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable. Visual Studio Web Tools Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time. Browser Link Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit. New HTML editor The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen. Lots more Visual Studio web dev features That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features. Lots, lots more Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!
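    A footnote on the filter enhancements described above: the post explains authentication filters without showing one, so here is a minimal sketch of what such a filter can look like, assuming MVC 5's System.Web.Mvc.Filters.IAuthenticationFilter interface (the attribute name itself is made up for illustration):

        using System.Web.Mvc;
        using System.Web.Mvc.Filters;

        // Hypothetical example of an MVC 5 authentication filter.
        public class RequirePrincipalAttribute : FilterAttribute, IAuthenticationFilter
        {
            // Runs before authorization filters: inspect credentials and either
            // set a principal or short-circuit with a result.
            public void OnAuthentication(AuthenticationContext filterContext)
            {
                var user = filterContext.Principal;
                if (user == null || !user.Identity.IsAuthenticated)
                    filterContext.Result = new HttpUnauthorizedResult();
            }

            // Runs after the action picks a result: lets the filter turn an
            // unauthorized result into a challenge (redirect, status code, etc.).
            public void OnAuthenticationChallenge(AuthenticationChallengeContext filterContext)
            {
                if (filterContext.Result is HttpUnauthorizedResult)
                    filterContext.Result = new HttpStatusCodeResult(401, "Authentication required");
            }
        }

    It could then be applied per-action ([RequirePrincipal] on an action method), per-controller, or globally via GlobalFilters.Filters.Add(new RequirePrincipalAttribute()).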

    Read the article

  • jQuery Time Entry with Time Navigation Keys

    - by Rick Strahl
    So, how do you display time values in your Web applications? Displaying date AND time values in applications is a lot less standardized than date display only. While date input has become fairly universal with various date picker controls available, time entry continues to be a bit non-standardized. In my own applications I tend to use the jQuery UI DatePicker control for date entries and it works well for that. Here's an example: The date entry portion is well defined and it makes perfect sense to have a calendar pop up so you can pick a date from a rich UI when necessary. However, time values are much less obvious when it comes to displaying a UI or even just making time entries more useful. There are a slew of time picker controls available but other than adding some visual glitz, they are not really making time entry any easier. Part of the reason for this is that time entry is usually pretty simple. Clicking on a dropdown of any sort and selecting a value from a long scrolling list tends to take more user interaction than just typing 5 characters (7 if am/pm is used). Keystrokes can make Time Entry easier Time entry may be pretty simple, but I find that adding a few hotkeys to handle time navigation can make it much easier. Specifically it'd be nice to have keys to jump to the current time (Now), increase/decrease minutes, and increase/decrease hours. The timeKeys jQuery PlugIn Some time ago I created a small plugin to handle this scenario. It's non-visual other than a tooltip that pops up when you press ? to display the hotkeys that are available: Try it Online The keys loosely follow the ancient Quicken convention of using the first and last letters of what you're increasing/decreasing (i.e. H to decrease and R to increase hours, and + and - for the base unit, minutes here). All navigation happens via the keystrokes shown above, so it's all non-visual, which I think is the most efficient way to deal with dates. To hook up the plug-in, start with the textbox:<input type="text" id="txtTime" name="txtTime" value="12:05 pm" title="press ? for time options" /> Note the title, which might be useful to alert people using the field that additional functionality is available. To hook up the plugin the code is as simple as:$("#txtTime").timeKeys(); You essentially tie the plugin to any text box control. Options The syntax for timeKeys allows for an options map parameter:$(selector).timeKeys(options); Options are passed as a parameter map object which can have the following properties: timeFormat: You can pass in a format string that allows you to format the time. The default is "hh:mm t", which is US time format that shows a 12 hour clock with am/pm. Alternately you can pass in "HH:mm", which uses 24 hour time. HH, hh, mm and t are translated in the format string - you can arrange the format as you see fit. callback: You can also specify a callback function that is called when the time value has been set. This allows you to either re-format the time or perform post processing (such as displaying a highlight if it's after a certain hour, for example). Here's another example that uses both options:$("#txtTime").timeKeys({ timeFormat: "HH:mm", callback: function (time) { showStatus("new time is: " + time.toString() + " " + $(this).val() ); } }); The plugin code itself is fairly simple. It hooks the keydown event and checks for the various keys that affect time navigation, which is straightforward. 
The bulk of the code however deals with parsing the time value and formatting the output using a Time class that implements parsing, formatting and time navigation methods. Here's the code for the timeKeys jQuery plug-in:/// <reference path="jquery.js" /> /// <reference path="ww.jquery.js" /> (function ($) { $.fn.timeKeys = function (options) { /// <summary> /// Attaches a set of hotkeys to time fields /// + Add minute - subtract minute /// H Subtract Hour R Add houR /// ? Show keys /// </summary> /// <param name="options" type="object"> /// Options: /// timeFormat: "hh:mm t" by default HH:mm alternate /// callback: callback handler after time assignment /// </param> /// <example> /// var proxy = new ServiceProxy("JsonStockService.svc/"); /// proxy.invoke("GetStockQuote",{symbol:"msft"},function(quote) { alert(result.LastPrice); },onPageError); ///</example> if (this.length < 1) return this; var opt = { timeFormat: "hh:mm t", callback: null } $.extend(opt, options); return this.keydown(function (e) { var $el = $(this); var time = new Time($el.val()); //alert($(this).val() + " " + time.toString() + " " + time.date.toString()); switch (e.keyCode) { case 78: // [N]ow time = new Time(new Date()); break; case 109: case 189: // - time.addMinutes(-1); break; case 107: case 187: // + time.addMinutes(1); break; case 72: //H time.addHours(-1); break; case 82: //R time.addHours(1); break; case 191: // ? if (e.shiftKey) $(this).tooltip("<b>N</b> Now<br/><b>+</b> add minute<br /><b>-</b> subtract minute<br /><b>H</b> Subtract Hour<br /><b>R</b> add hour", 4000, { isHtml: true }); return false; default: return true; } $el.val(time.toString(opt.timeFormat)); if (opt.callback) { // call async and set context in this element setTimeout(function () { opt.callback.call($el.get(0), time) }, 1); } return false; }); } Time = function (time, format) { /// <summary> /// Time object that can parse and format /// a time values. /// </summary> /// <param name="time" type="object"> /// A time value as a string (12:15pm or 23:01), a Date object /// or time value. /// /// </param> /// <param name="format" type="string"> /// Time format string: /// HH:mm (23:01) /// hh:mm t (11:01 pm) /// </param> /// <example> /// var time = new Time( new Date()); /// time.addHours(5); /// time.addMinutes(10); /// var s = time.toString(); /// /// var time2 = new Time(s); // parse with constructor /// var t = time2.parse("10:15 pm"); // parse with .parse() method /// alert( t.hours + " " + t.mins + " " + t.ampm + " " + t.hours25) ///</example> var _I = this; this.date = new Date(); this.timeFormat = "hh:mm t"; if (format) this.timeFormat = format; this.parse = function (time) { /// <summary> /// Parses time value from a Date object, or string in format of: /// 12:12pm or 23:01 /// </summary> /// <param name="time" type="any"> /// A time value as a string (12:15pm or 23:01), a Date object /// or time value. 
/// /// </param> if (!time) return null; // Date if (time.getDate) { var t = {}; var d = time; t.hours24 = d.getHours(); t.mins = d.getMinutes(); t.ampm = "am"; if (t.hours24 > 11) { t.ampm = "pm"; if (t.hours24 > 12) t.hours = t.hours24 - 12; } time = t; } if (typeof (time) == "string") { var parts = time.split(":"); if (parts.length < 2) return null; var time = {}; time.hours = parts[0] * 1; time.hours24 = time.hours; time.mins = parts[1].toLowerCase(); if (time.mins.indexOf("am") > -1) { time.ampm = "am"; time.mins = time.mins.replace("am", ""); if (time.hours == 12) time.hours24 = 0; } else if (time.mins.indexOf("pm") > -1) { time.ampm = "pm"; time.mins = time.mins.replace("pm", ""); if (time.hours < 12) time.hours24 = time.hours + 12; } time.mins = time.mins * 1; } _I.date.setMinutes(time.mins); _I.date.setHours(time.hours24); return time; }; this.addMinutes = function (mins) { /// <summary> /// adds minutes to the internally stored time value. /// </summary> /// <param name="mins" type="number"> /// number of minutes to add to the date /// </param> _I.date.setMinutes(_I.date.getMinutes() + mins); } this.addHours = function (hours) { /// <summary> /// adds hours to the internally stored time value. /// </summary> /// <param name="hours" type="number"> /// number of hours to add to the date /// </param> _I.date.setHours(_I.date.getHours() + hours); } this.getTime = function () { /// <summary> /// returns a time structure from the currently /// stored time value. /// Properties: hours, hours24, mins, ampm /// </summary> return new Time(_I.date); } this.toString = function (format) { /// <summary> /// returns a short time string for the internal date /// formats: 12:12 pm or 23:12 /// </summary> /// <param name="format" type="string"> /// optional format string for date /// HH:mm, hh:mm t /// </param> if (!format) format = _I.timeFormat; var hours = _I.date.getHours(); if (format.indexOf("t") > -1) { if (hours > 11) format = format.replace("t", "pm") else format = format.replace("t", "am") } if (format.indexOf("HH") > -1) format = format.replace("HH", hours.toString().padL(2, "0")); if (format.indexOf("hh") > -1) { if (hours > 12) hours -= 12; if (hours == 0) hours = 12; format = format.replace("hh", hours.toString().padL(2, "0")); } if (format.indexOf("mm") > -1) format = format.replace("mm", _I.date.getMinutes().toString().padL(2, "0")); return format; } // construction if (time) this.time = this.parse(time); } String.prototype.padL = function (width, pad) { if (!width || width < 1) return this; if (!pad) pad = " "; var length = width - this.length; if (length < 1) return this.substr(0, width); return (String.repeat(pad, length) + this).substr(0, width); } String.repeat = function (chr, count) { var str = ""; for (var x = 0; x < count; x++) { str += chr }; return str; } })(jQuery); The plugin consists of the actual plugin and the Time class which handles parsing and formatting of the time value via the .parse() and .toString() methods. Code like this always ends up taking more effort than the actual logic, unfortunately. There are libraries out there that can handle this, like datejs or even ww.jquery.js (which is what I use), but to keep the code self-contained for this post the plugin doesn't rely on external code. There's one optional exception: the code as is has one dependency on ww.jquery.js for the tooltip plugin that provides the small popup for all the hotkeys available. 
You can replace that code with some other mechanism to display hotkeys or simply remove it since that behavior is optional. While we're at it: A jQuery dateKeys plugIn Although date entry tends to be much better served with drop down calendars to pick dates from, often it's also easier to pick dates using a few simple hotkeys. Navigation that uses + - for days and M and H for MontH navigation, Y and R for YeaR navigation are a quick way to enter dates without having to resort to using a mouse and clicking around to what you want to find. Note that this plugin does have a dependency on ww.jquery.js for the date formatting functionality.$.fn.dateKeys = function (options) { /// <summary> /// Attaches a set of hotkeys to date 'fields' /// + Add day - subtract day /// M Subtract Month H Add montH /// Y Subtract Year R Add yeaR /// ? Show keys /// </summary> /// <param name="options" type="object"> /// Options: /// dateFormat: "MM/dd/yyyy" by default "MMM dd, yyyy /// callback: callback handler after date assignment /// </param> /// <example> /// var proxy = new ServiceProxy("JsonStockService.svc/"); /// proxy.invoke("GetStockQuote",{symbol:"msft"},function(quote) { alert(result.LastPrice); },onPageError); ///</example> if (this.length < 1) return this; var opt = { dateFormat: "MM/dd/yyyy", callback: null }; $.extend(opt, options); return this.keydown(function (e) { var $el = $(this); var d = new Date($el.val()); if (!d) d = new Date(1900, 0, 1, 1, 1); var month = d.getMonth(); var year = d.getFullYear(); var day = d.getDate(); switch (e.keyCode) { case 84: // [T]oday d = new Date(); break; case 109: case 189: d = new Date(year, month, day - 1); break; case 107: case 187: d = new Date(year, month, day + 1); break; case 77: //M d = new Date(year, month - 1, day); break; case 72: //H d = new Date(year, month + 1, day); break; case 191: // ? if (e.shiftKey) $el.tooltip("<b>T</b> Today<br/><b>+</b> add day<br /><b>-</b> subtract day<br /><b>M</b> subtract Month<br /><b>H</b> add montH<br/><b>Y</b> subtract Year<br/><b>R</b> add yeaR", 5000, { isHtml: true }); return false; default: return true; } $el.val(d.formatDate(opt.dateFormat)); if (opt.callback) // call async setTimeout(function () { opt.callback.call($el.get(0),d); }, 10); return false; }); } The logic for this plugin is similar to the timeKeys plugin, but it's a little simpler as it tries to directly parse the date value from a string via new Date(inputString). As mentioned it also uses a helper function from ww.jquery.js to format dates which removes the logic to perform date formatting manually which again reduces the size of the code. And the Key is… I've been using both of these plugins in combination with the jQuery UI datepicker for datetime values and I've found that I rarely actually pop up the date picker any more. It's just so much more efficient to use the hotkeys to navigate dates. It's still nice to have the picker around though - it provides the expected behavior for date entry. For time values however I can't justify the UI overhead of a picker that doesn't make it any easier to pick a time. Most people know how to type in a time value and if they want shortcuts keystrokes easily beat out any pop up UI. Hopefully you'll find this as useful as I have found it for my code. 
Resources Online Sample Download Sample Project © Rick Strahl, West Wind Technologies, 2005-2011 Posted in jQuery, HTML

    Read the article

  • More information on the Patch Tuesday updates for SQL Server

    - by AaronBertrand
    Last week, Microsoft released a series of patches for all supported versions of SQL Server (from SQL Server 2005 SP3 all the way to SQL Server 2008 R2). The reason for the patch against SQL Server installations is largely a client-side issue with the XML viewer application, and for SQL Server specifically, the exploit is limited to potential information disclosure. A very easy way to avoid exposure to this exploit is simply to never open a file with the .disco extension (these files are likely already...(read more)

    Read the article

  • ESB Toolkit 2.0 EndPointConfig (HTTPS with WCF-BasicHttp and the ESB Toolkit 2.0)

    - by Andy Morrison
    Earlier this week I had an ESB endpoint (Off-Ramp in ESB parlance) that I was sending to over http using WCF-BasicHttp. I needed to switch the protocol to https:, which I did by changing my UDDI Binding over to https:. No problem from a management perspective; however, when I tried to run the process I saw this exception:
    Event Type: Error
    Event Source: BizTalk Server 2009
    Event Category: BizTalk Server 2009
    Event ID: 5754
    Date: 3/10/2010
    Time: 2:58:23 PM
    User: N/A
    Computer: XXXXXXXXX
    Description: A message sent to adapter "WCF-BasicHttp" on send port "SPDynamic.XXX.SR" with URI "https://XXXXXXXXX.com/XXXXXXX/whatever.asmx" is suspended. Error details: System.ArgumentException: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via
    at System.ServiceModel.Channels.TransportChannelFactory`1.ValidateScheme(Uri via)
    at System.ServiceModel.Channels.HttpChannelFactory.ValidateCreateChannelParameters(EndpointAddress remoteAddress, Uri via)
    at System.ServiceModel.Channels.HttpChannelFactory.OnCreateChannel(EndpointAddress remoteAddress, Uri via)
    at System.ServiceModel.Channels.ChannelFactoryBase`1.InternalCreateChannel(EndpointAddress address, Uri via)
    at System.ServiceModel.Channels.ChannelFactoryBase`1.CreateChannel(EndpointAddress address, Uri via)
    at System.ServiceModel.Channels.ServiceChannelFactory.ServiceChannelFactoryOverRequest.CreateInnerChannelBinder(EndpointAddress to, Uri via)
    at System.ServiceModel.Channels.ServiceChannelFactory.CreateServiceChannel(EndpointAddress address, Uri via)
    at System.ServiceModel.Channels.ServiceChannelFactory.CreateChannel(Type channelType, EndpointAddress address, Uri via)
    at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)
    at System.ServiceModel.ChannelFactory`1.CreateChannel()
    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.GetChannel[TChannel](IBaseMessage bizTalkMessage, ChannelFactory`1& cachedFactory)
    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.SendMessage(IBaseMessage bizTalkMessage)
    MessageId: {1170F4ED-550F-4F7E-B0E0-1EE92A25AB10}
    InstanceID: {1640C6C6-CA9C-4746-AEB0-584FDF7BB61E}
    I knew from previous experience that I likely needed to set the SecurityMode setting for my Send Port. But how do you do this for a Dynamic port (which I was using, since this is an ESB solution)? Within the UDDI portal you have to add an additional Instance Info to your Binding named EndPointConfig. Then you have to set its value to SecurityMode=Transport, like this: The EndPointConfig is how the ESB Toolkit 2.0 provides extensibility for the various transports. To see what the key-value pair options are for a given transport, open up an itinerary and change one of your resolvers to a "static" resolver by setting the "Resolver Implementation" to Static. Then select a "Transport Name", for instance WCF-BasicHttp. At this point you can then click on the "EndPoint Configuration" property to see an adapter/ramp-specific properties dialog (key-value pairs). Here's the dialog that popped up for WCF-BasicHttp: I simply set the SecurityMode to Transport. Please note that you will get different properties within the window depending on the Transport Name you select for the resolver. 
When you are done with your settings, export the itinerary to disk and find that xml; then find that resolver's xml within that file. It will look like endpointConfig=SecurityMode=Transport in this case. Note that if you set additional properties you will have additional key-value pairs after endpointConfig=. Copy that string and paste it into the UDDI portal for your Binding's EndPointConfig Instance Info value.

    Read the article

  • dasBlog

    - by Daniel Moth
    Some people like blogging on a site that is completely managed by someone else (e.g. http://wordpress.com/) and others, like me, prefer hosting their own blog at their own domain. In the latter case you need to decide what blog engine to install on your web space to power your blog. There are many free blog engines to choose from (e.g. the one from http://wordpress.org/). If, like me, you want to use a blog engine that is based on the .NET platform you have many choices including BlogEngine.NET, Subtext and the one I picked: dasBlog. In this post I'll describe the steps I took to get going with the open source dasBlog (home page, source page). A. Installing First I installed dasBlog on my local Windows 7 machine where I have IIS7 installed. To install dasBlog, I started by clicking the "Install" button on its web gallery page. After that I went through configuration, theming and adding content as described below. Once I was happy that everything was working correctly on the local machine, I set this up on a hosting service. I went for a Windows IIS7 shared hosting 3 month Economy plan from GoDaddy. The dasBlog site lists a bunch of other hosts. You can read the installation instructions for dasBlog, and with GoDaddy I just had to click one button since it is available as part of their quick-install apps. With GoDaddy I had a previewdns option that allowed me to play around and preview my site before going live. B. Configuring After it was installed (on the local machine and/or hosting provider), I followed the obvious steps to create an admin user and logged in. This displays an admin navigation bar with the following options: 1. Navigator Links: I decided I was not going to use this feature. I manage links on the side of my blog manually elsewhere as part of the theme. So, I deleted every entry on this page and ignored it thereafter. 2. Blogroll: Ditto - same comment as for Navigator Links. 3. Content Filters: I did not delete (or add) these, but I did ensure both checkboxes are not checked. I.e. I am not using this feature now, but I may return to it in the future. 4. Activity: This is a read-only view of various statistics. So nothing to configure here, but useful to come back to for complementary statistics to whatever other statistical package you use (e.g. free stats as part of the hosting; I also use feedburner for syndication stats). 5. Cross-posting: I did not need that, so I turned it off via the Configuration Settings discussed next. 6. Configuration Settings: This is where the bulk of the configuration for the blog takes place, and it is all stored in a single XML file: the Site.Config file. There are truly self-explanatory options to pick for Basic Settings, Services Settings and Services to Ping, Syndication Settings (this is where you link to your feedburner name if you have one) and Mail to Weblog Settings (I keep this turned off). There are also "Xml Storage System Settings" (I keep this turned off), "OpenId Settings" (I allow OpenID commenters), "Spammer Settings" (Enable captcha, never show email addresses) and "Comment settings" (Enable comments, don't allow on older posts, don't allow html). There are also Appearance Settings (I checked the "Use Post Title for Permalink", replaced spaces with a hyphen and unchecked the "Use Unique Title"). Finally, there are also Notification Settings, but they are a bit hit and miss in my case, in that I don't always get the emails (still investigating this). C. 
Adding Content You can add content via the "Add Entry" link on the admin navigation bar, or by configuring the "Mail to Weblog" settings and sending email, or, do what I've started doing, use Live Writer (also the team has a blog). Another way to add content is programmatically if, for example, you are migrating content from another blog (and I'll cover that in a separate post sharing the code). What you should know is that all blog content (posts and comments) lives in XML files in a folder called "content" under your dasBlog installation. D. Theming There is a very good guide about themes for dasBlog, there is also a similar guide with screenshots (scroll down to "So how do I create a theme") and the dasBlog macro reference. When you install dasBlog, there are many themes available; each theme is in its own folder (the folder name is the theme name) under the themes folder. You may have noticed that you can switch between these via the "Appearance Settings" described above (look for the combobox after the Default Theme label). I created my own theme by copy-pasting an existing theme folder, renaming it and then switching to it as the default. I then opened the folder in Visual Studio and hacked around the HTML in the 3 files (itemTemplate, homeTemplate and dayTemplate). These files have a blogtemplate file extension, which I temporarily renamed to HTML as I was editing them. There is no more advice I can offer here as this is a matter of taste and the aforementioned links are all I used. Personally, I had salvaged the CSS (and structure) from my previous blog and wanted to make this one match it as closely as possible - I think I have succeeded. E. If you run into any issue with dasBlog... ...use your favorite search engine to find answers. Many bloggers have been using this engine for a while and have documented issues and workarounds over time. One such example is ScottHa's dasBlog category; another example is therightstuff where I "borrowed" the idea/macro for the outlook-style on-page navigation. If you don't find what you want through searching, try posting a question to the forums. Comments about this post welcome at the original blog.

    Read the article

  • Analyze your IIS Log Files - Favorite Log Parser Queries

    - by The Official Microsoft IIS Site
    The other day I was asked if I knew about a tool that would allow users to easily analyze the IIS Log Files, to process and look for specific data in a way that could easily be automated. My recommendation was that if they were comfortable using a SQL-like language, they should use Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS Logs, Event Viewer entries, XML files, CSV files, File System and others; and it allows you...(read more)
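    For flavor, here is the sort of invocation Log Parser accepts over IIS logs - the query below is the tool's own SQL-like language, and the field names assume the standard W3C extended log schema (adjust the log file pattern to your site's log directory):

        logparser -i:IISW3C "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"

    That one line answers "which URLs get hit the most?" straight from the raw log files, which is exactly the kind of ad-hoc analysis the tool is recommended for.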

    Read the article

  • CDN CNAMEs not resolving to customer origin

    - by Donald Jenkins
    I have set up an Edgecast CDN to mirror all my static content. Because I use the root of my domain (donaldjenkins.com) to host my main site—using Google Analytics which sets cookies—I've stored the corresponding static files in a separate cookieless domain (donaldjenkins.info) which is used only for this purpose. I've set it up (using this guide for general guidance), with the following structure, based on a combination of customer origin and CDN origin to make the most of the chosen short domain name and provide meaningful URLs: http://donaldjenkins.info:80 is set as the customer origin for the content stored in the CDN at directory http://wac.62E0.edgecastcdn.net/8062E0/donaldjenkins.info; I've then set up various subdomains of a separate domain, the conveniently-named cdn.dj, as CDN-origin Edge CNAMEs for each of the corresponding static content types: js.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/js; css.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/css; images.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/images and so on. This results in some pretty nice, short, clear URLs. The DNS zone file for cdn.dj (yes, it's a real domain name registered in Djibouti) is set properly: cdn.dj 43200 IN A 205.186.157.162 css.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. images.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. js.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. The DNS resolves to the Edgecast URL: $ host js.cdn.dj js.cdn.dj is an alias for wac.62E0.edgecastcdn.net. wac.62E0.edgecastcdn.net is an alias for gs1.wac.edgecastcdn.net. gs1.wac.edgecastcdn.net has address 93.184.220.20 But whenever I try to fetch a file in any of the directories to which the CNAME assets map, I get a 404: $ curl http://js.cdn.dj/combined.js <?xml version="1.0" encoding="iso-8859-1"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>404 - Not Found</title> </head> <body> <h1>404 - Not Found</h1> </body> </html> despite the fact that the corresponding customer origin file exists: $ curl http://donaldjenkins.info/js/combined.js fetches the content of the combined.js file. Yet it's been more than enough time for the DNS to propagate since I set up the CDN. There's obviously some glaring mistake in the above-described setup, and I'm a bit of a novice with CDNs—but any suggestions would be gratefully received.

    Read the article

  • Daily tech links for .net and related technologies - Mar 29-31, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Mar 29-31, 2010 Web Development Querying the Future With Reactive Extensions - Phil Haack Creating an OData API for StackOverflow including XML and JSON in 30 minutes - Scott Hanselman MVC Automatic Menu - Nuri Halperin jqGrid for ASP.NET MVC - TriRand Team Foolproof Provides Contingent Data Annotation Validation for ASP.NET MVC 2 -Nick Riggs Using FubuMVC.UI in asp.net MVC : Getting started - Cannibal Coder Building A Custom ActionResult in MVC...(read more)

    Read the article

  • Daily tech links for .net and related technologies - Apr 5-7, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 5-7, 2010 Web Development HTML 5 is Born Old - Quake in HTML 5 Example Image Preview in ASP.NET MVC - Imran Advanced ASP.NET MVC 2 - Brad Wilson How to Serialize/Deserialize Complex XML in ASP.Net / C# - Impact Works Ban HTML comments from your pages and views - Bertrand Le Roy Measuring ASP.NET and SharePoint output cache - Gunnar Peipman Web Design Eye Candy vs. Bare-Bones in UI Design - Max Steenbergen Empathizing Color Psychology in Web...(read more)

    Read the article

  • Welcome to my geeks blog

    - by bconlon
    Hi and welcome! I'm Bazza and this is my geeks blog. I have 20 years of Visual Studio, mainly C++, MFC, ATL and now, thankfully, C#, and I am embarking on the new world (well, new to me) of WPF, so I thought I would try and capture my successful...and not so successful...WPF experiences for the geek world. So where to start? WPF? What I know so far... From wiki..."Windows Presentation Foundation (or WPF) is a graphical subsystem for rendering user interfaces in Windows-based applications." Hmm, great, but didn't MFC, ATL (my head hurt with that one), and .Net all have APIs to allow me to code against the Windows Graphics Device Interface (GDI)? "Rather than relying on the older GDI subsystem, WPF utilizes DirectX. WPF attempts to provide a consistent programming model for building applications and provides a separation between the user interface and the business logic." OK, different drawing code, same Windows, and weren't we always taught to separate our UI, Business Layer and Data Access Layer? "WPF employs XAML, a derivative of XML, to define and link various UI elements. WPF applications can be deployed as standalone desktop programs, or hosted as an embedded object in a website." Cool, now we're getting somewhere. So when they say separation they really mean separation. The crux of this appears to be that you can have creative people writing the UI and making it attractive and intuitive to use, whilst the geeks concentrate on writing the Business and Data Access stuff. XAML (eXtensible Application Markup Language) maps XML elements and attributes directly to Common Language Runtime (CLR) object instances, properties and events. True separation of the View and Model. WPF also provides logical separation of a control from its appearance. In a traditional Windows system, all Controls have a base class containing a Windows handle and each Control knows how to render itself. In WPF, the controls are more like those in a Web Browser using Cascading Style Sheets; they are not wrappers for standard Windows Controls. Instead, they have a default 'template' that defines a visual theme which can easily be replaced by a custom template. But it gets better. WPF concentrates heavily on Data Binding, where the client can bind directly to data on the server. I think this concept was first introduced in 'Classic' Visual Basic, where you could bind a list directly to data from an Access database, and you could do similar in ASP .Net. However, the WPF implementation is far superior to its predecessors. There are also other technologies that I want to look at like LINQ and the Entity Framework, but that's all for now.
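    To make the data binding point concrete, here is a small code-only sketch using standard WPF types (in real projects the binding usually lives in the XAML instead, e.g. Text="{Binding Name}"; the class names here are just illustrative):

        using System.ComponentModel;
        using System.Windows.Controls;
        using System.Windows.Data;

        public class PersonViewModel : INotifyPropertyChanged
        {
            private string _name;
            public event PropertyChangedEventHandler PropertyChanged;

            public string Name
            {
                get { return _name; }
                set
                {
                    _name = value;
                    // Raising PropertyChanged is what lets bound UI elements
                    // update automatically when the model changes.
                    var handler = PropertyChanged;
                    if (handler != null) handler(this, new PropertyChangedEventArgs("Name"));
                }
            }
        }

        public static class BindingDemo
        {
            public static TextBlock Build(PersonViewModel vm)
            {
                var text = new TextBlock();
                // Wire the TextBlock's Text to the view model's Name property.
                text.SetBinding(TextBlock.TextProperty, new Binding("Name") { Source = vm });
                return text;   // re-renders whenever vm.Name changes
            }
        }

    No event-handler plumbing in the view: the control subscribes to the model through the binding, which is the separation of UI and business logic the quotes above describe.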

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point or even come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services (couldn't get AJAX/JSON to run on IIS8) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did was test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I created the simplest Hello World request that I could come up with for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression: <%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page: Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed the ability to add ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test: 100,000 iterations Load factor of 20 concurrent connections IISReset before starting A short warm up run for API and MVC to make sure startup cost is mitigated Here is the batch file I used for the test: IISRESET REM make sure you add REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin REM to your path so ab.exe can be found REM Warm up ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop which is a Dell XPS with a quad core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative. Results Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. 
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type. ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands: REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results: The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI actually performs significantly better returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance improved significantly with JSON results, from 5580 to 6530 or so, which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached, running the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests were returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number, which results in 68 and 69 bytes respectively. The same URL produces different result lengths, which is what ab.exe reports. I didn't notice at first but the same is happening when running the ASHX handler with the JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower. 
For these tests this is why I did the IISRESET and warm up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. Subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again… Summary As I stated at the outset, these were informal tests to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance but also about the suitability for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often times old stuff that's been optimized and designed for a time of less horse power can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my Tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: Don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff… Resources Source Code on GitHub Apache HTTP Server Project (ab.exe is part of the binary distribution) © Rick Strahl, West Wind Technologies, 2005-2012 Posted in ASP.NET  Web Api
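As a footnote to the 'Length Failures' snag above: if you want ab.exe to stop flagging the varying response sizes, one option is to serialize dates with a fixed-width format so every response for the same URL has the same byte count. A minimal sketch, assuming a Json.NET build that supports JsonSerializerSettings.DateFormatString (the format string here is my choice, not from the original project):

using System;
using Newtonsoft.Json;

class FixedLengthDates
{
    static void Main()
    {
        var settings = new JsonSerializerSettings
        {
            // Always exactly three millisecond digits, so the payload length is stable.
            DateFormatString = "yyyy-MM-dd'T'HH:mm:ss.fffK"
        };

        string json = JsonConvert.SerializeObject(
            new { Id = 10, Name = "Rick", Entered = DateTime.Now }, settings);

        Console.WriteLine(json); // e.g. {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841-10:00"}
    }
}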

    Read the article

  • An XEvent a Day (6 of 31) – Targets Week – asynchronous_file_target

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - ring_buffer , looked at the ring_buffer Target in Extended Events and how it outputs the raw Event data in an XML document.  Today I’m going to go over the details of the other Target in Extended Events that captures raw Event data, the asynchronous_file_target. What is the asynchronous_file_target? The asynchronous_file_target holds the raw format Event data in a proprietary binary file format that persists beyond server restarts and can be provided to another...(read more)

    Read the article

LinqToXML removing empty xmlns attributes & adding attributes like xmlns:xsi, xsi:schemaLocation

    - by Rohit Gupta
Suppose you need to generate the following XML:

<GenevaLoader xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd"
    xmlns="http://www.advent.com/SchemaRevLevel401/Geneva">
    <PriceRecords>
        <PriceRecord>
        </PriceRecord>
    </PriceRecords>
</GenevaLoader>

Normally you would write the following C# code to accomplish this:

const string ns = "http://www.advent.com/SchemaRevLevel401/Geneva";
XNamespace xnsp = ns;
XNamespace xsi = XNamespace.Get("http://www.w3.org/2001/XMLSchema-instance");

XElement root = new XElement( xnsp + "GenevaLoader",
    new XAttribute(XNamespace.Xmlns + "xsi", xsi.NamespaceName),
    new XAttribute( xsi + "schemaLocation", "http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd"));

XElement priceRecords = new XElement("PriceRecords");
root.Add(priceRecords);

for(int i = 0; i < 3; i++)
{
    XElement price = new XElement("PriceRecord");
    priceRecords.Add(price);
}

root.Save("geneva.xml");

The problem with this approach is that it adds an additional empty xmlns attribute on the "PriceRecords" element, like so:

<GenevaLoader xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd" xmlns="http://www.advent.com/SchemaRevLevel401/Geneva">
    <PriceRecords xmlns="">
        <PriceRecord>
        </PriceRecord>
    </PriceRecords>
</GenevaLoader>

The solution is to add the xmlns namespace in code to each child and grandchild element of the root element, like so:

XElement priceRecords = new XElement( xnsp + "PriceRecords");
root.Add(priceRecords);

for(int i = 0; i < 3; i++)
{
    XElement price = new XElement(xnsp + "PriceRecord");
    priceRecords.Add(price);
}
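If the tree has already been built without namespace-qualified names, a variant of the same fix is to stamp the namespace onto every element after construction. A small self-contained sketch (the sample tree is abbreviated from the XML above):

using System;
using System.Xml.Linq;

class NamespaceFix
{
    static void Main()
    {
        XNamespace xnsp = "http://www.advent.com/SchemaRevLevel401/Geneva";

        // Tree built without namespaces...
        var root = new XElement("GenevaLoader",
            new XElement("PriceRecords",
                new XElement("PriceRecord")));

        // ...then apply the default namespace to the root and every descendant.
        foreach (XElement el in root.DescendantsAndSelf())
            el.Name = xnsp + el.Name.LocalName;

        Console.WriteLine(root); // no empty xmlns="" on the child elements
    }
}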

    Read the article

  • NHibernate Tools

    - by Ricardo Peres
Felice Pollano is the author of two great new tools for working with NHibernate: NH Workbench: an IDE for writing HQL queries against a model db2hbm: generation of .hbm.xml files from a database (currently only SQL Server, more to come) I suggest you give them a try and give Felice your feedback!

    Read the article

  • An XEvent a Day (7 of 31) – Targets Week – bucketizers

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - asynchronous_file_target , looked at the asynchronous_file_target Target in Extended Events and how it outputs the raw Event data in an XML document. Continuing with Targets week today, we’ll look at the bucketizer targets in Extended Events which can be used to group Events based on the Event data that is being returned. What is the bucketizer? The bucketizer performs grouping of Events as they are processed by the target into buckets based on the Event data and...(read more)

    Read the article

Using the HTML5 <input type="file" multiple="multiple"> Tag in ASP.NET

    - by Rick Strahl
Per HTML5 spec the <input type="file" /> tag allows for multiple files to be picked from a single File upload button. This is actually a very subtle change that's very useful as it makes it much easier to send multiple files to the server without using complex uploader controls. Please understand though, that even though you can send multiple files using the <input type="file" /> tag, the process of how those files are sent hasn't really changed - there's still no progress information or other hooks that allow you to automatically make for a nicer upload experience without additional libraries or code. For that you will still need some sort of library (I'll post an example in my next blog post using plUpload). All the new features allow for is to make it easier to select multiple images from disk in one operation. Where you might have required many file upload controls before to upload several files, one File control can potentially do the job. How it works To create a file input box that allows multiple file selection you can simply do: <form method="post" enctype="multipart/form-data"> <label>Upload Images:</label> <input type="file" multiple="multiple" name="File1" id="File1" accept="image/*" /> <hr /> <input type="submit" id="btnUpload" value="Upload Images" /> </form> Now when the file open dialog pops up - depending on the browser and whether the browser supports it - you can pick multiple files. Here I'm using Firefox; using the thumbnail preview I can easily pick images to upload on a form: Note that I can select multiple images in the dialog, all of which get stored in the file textbox. The UI for this can be different in some browsers. For example Chrome displays "3 files selected" as text next to the Browse… button when I choose three, rather than showing any files in the textbox. Most other browsers display the standard file input box and display the multiple filenames as a comma delimited list in the textbox. Note that you can also specify the accept attribute in the <input> tag, which specifies a mime-type to specify what type of content to allow. Here I'm only allowing images (image/*) and the browser complies by just showing me image files to display. Likewise I could use text/* for all text formats registered on the machine or text/xml to only show XML files (which would include xml, xsl, xsd etc.). Capturing Files on the Server with ASP.NET When you upload files to an ASP.NET server there are a couple of things to be aware of. When multiple files are uploaded from a single file control, they are assigned the same name. In other words if I select 3 files to upload on the File1 control shown above I get three file form variables named File1. This means I can't easily retrieve files by their name: HttpPostedFileBase file = Request.Files["File1"]; because there will be multiple files for a given name. The above only selects the first file. Instead you can only reliably retrieve files by their index. Below is an example I use in my app to capture a number of images uploaded and store them into a database using a business object and EF 4.2. for (int i = 0; i < Request.Files.Count; i++) { HttpPostedFileBase file = Request.Files[i]; if (file.ContentLength == 0) continue; if (file.ContentLength > App.Configuration.MaxImageUploadSize) { ErrorDisplay.ShowError("File " + file.FileName + " is too large. Max upload size is: " + App.Configuration.MaxImageUploadSize); return View("UploadClassic",model); } var image = new ClassifiedsBusiness.Image(); var ms = new MemoryStream(16498); file.InputStream.CopyTo(ms); image.Entered = DateTime.Now; image.EntryId = model.Entry.Id; image.ContentType = "image/jpeg"; image.ImageData = ms.ToArray(); ms.Seek(0, SeekOrigin.Begin); // resize image if necessary and turn into jpeg Bitmap bmp = Imaging.ResizeImage(ms.ToArray(), App.Configuration.MaxImageWidth, App.Configuration.MaxImageHeight); ms.Close(); ms = new MemoryStream(); bmp.Save(ms,ImageFormat.Jpeg); image.ImageData = ms.ToArray(); bmp.Dispose(); ms.Close(); model.Entry.Images.Add(image); } This works great and also allows you to capture input from multiple input controls if you are dealing with browsers that don't support multiple file selections in the file upload control. The important thing here is that I iterate over the files by index, rather than using a foreach loop over the Request.Files collection. The files collection returns key name strings, rather than the actual files (who thought that was a good idea at Microsoft?), and so that isn't going to work since you end up getting multiple keys with the same name. Instead a plain for loop has to be used to loop over all files. Another Option in ASP.NET MVC If you're using ASP.NET MVC you can use the code above as well, but you have yet another option to capture multiple uploaded files by using a parameter for your post action method. public ActionResult Save(HttpPostedFileBase[] file1) { foreach (var file in file1) { if (file.ContentLength == 0) continue; // do something with the file } } Note that in order for this to work you have to specify each posted file variable individually in the parameter list. This works great if you have a single file upload control to deal with. You can also pass this in addition to your main model to separate out a ViewModel and a set of uploaded files: public ActionResult Edit(EntryViewModel model, HttpPostedFileBase[] uploadedFile) You can also make the uploaded files part of the ViewModel itself - just make sure you use the appropriate naming for the variable name in the HTML document (since there's no Html.FileFor() extension). Browser Support You knew this was coming, right? The feature is really nice, but unfortunately not supported universally yet. Once again Internet Explorer is the problem: No shipping version of Internet Explorer supports multiple file uploads. IE10 supposedly will, but even IE9 does not. All other major browsers - Chrome, Firefox, Safari and Opera - support multi-file uploads in their latest versions. So how can you handle this? If you need to provide multiple file uploads you can simply add multiple file selection boxes and let people either select multiple files with a single upload file box or use multiples. Alternately you can do some browser detection and if IE is used simply show the extra file upload boxes. It's not ideal, but either one of these approaches makes life easier for folks that use a decent browser and leaves you with a functional interface for those that don't. Here's a UI I recently built as an alternate uploader with multiple file upload buttons: I say this is my 'alternate' uploader - for my primary uploader I continue to use an add-in solution. Specifically I use plUpload and I'll discuss how that's implemented in my next post. 
Although I think that plUpload (and many of the other packaged JavaScript upload solutions) are a better choice, especially for large uploads, for simple one file uploads input boxes work well enough. The advantage of this solution is that it's very easy to handle on the server side. Any of the JavaScript controls require special handling for uploads, which I'll also discuss in my next post. © Rick Strahl, West Wind Technologies, 2005-2012 Posted in HTML5  ASP.NET  MVC
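To make the browser-detection fallback mentioned above concrete, here is a hypothetical MVC helper (FileUploadHelper is my name, not from the article) that renders one multi-select input for capable browsers and several single-file inputs for pre-IE10 Internet Explorer:

using System.Text;
using System.Web;

public static class FileUploadHelper
{
    public static IHtmlString FileInputs(HttpRequestBase request, string name, int fallbackCount = 3)
    {
        // ASP.NET's browser capabilities report "IE" for Internet Explorer.
        bool isIe = request.Browser != null && request.Browser.Browser == "IE";

        var sb = new StringBuilder();
        if (!isIe)
        {
            // Capable browsers get a single multi-select file input.
            sb.AppendFormat("<input type=\"file\" multiple=\"multiple\" name=\"{0}\" />", name);
        }
        else
        {
            // IE falls back to several plain single-file inputs with the same name.
            for (int i = 0; i < fallbackCount; i++)
                sb.AppendFormat("<input type=\"file\" name=\"{0}\" />", name);
        }
        return new HtmlString(sb.ToString());
    }
}

In a Razor view this would be used as @FileUploadHelper.FileInputs(Request, "File1"), and because both renderings post files under the same name, the index-based server-side loop shown earlier handles either case unchanged.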

    Read the article

  • Change Tracking

    - by Ricardo Peres
You may recall my last post on Change Data Capture. This time I am going to talk about another option for tracking changes to tables on SQL Server: Change Tracking. The main differences between the two are: Change Tracking works with SQL Server 2008 Express; Change Tracking does not require SQL Server Agent to be running; Change Tracking does not keep the old values in case of an UPDATE or DELETE; Change Data Capture uses an asynchronous process, so there is no overhead on each operation; Change Data Capture requires more storage and processing. Here's some code that illustrates its usage:

-- for demonstrative purposes, table Post of database Blog only contains two columns, PostId and Title
-- enable change tracking for database Blog, for 2 days
ALTER DATABASE Blog SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
-- enable change tracking for table Post
ALTER TABLE Post ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);
-- see current records on table Post
SELECT * FROM Post
SELECT * FROM sys.sysobjects WHERE name = 'Post'
SELECT * FROM sys.sysdatabases WHERE name = 'Blog'
-- confirm that table Post and database Blog are being change tracked
SELECT * FROM sys.change_tracking_tables
SELECT * FROM sys.change_tracking_databases
-- see current version for table Post
SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c;
-- update post
UPDATE Post SET Title = 'First Post Title Changed' WHERE Title = 'First Post Title';
-- see current version for table Post
SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c;
-- see changes since version 0 (initial)
SELECT p.Title, c.PostId, SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT FROM CHANGETABLE(CHANGES Post, 0) AS c LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId;
-- is column Title of table Post changed since version 0?
SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(COLUMNPROPERTY(OBJECT_ID('Post'), 'Title', 'ColumnId'), (SELECT SYS_CHANGE_COLUMNS FROM CHANGETABLE(CHANGES Post, 0) AS c))
-- get current version
SELECT CHANGE_TRACKING_CURRENT_VERSION()
-- disable change tracking for table Post
ALTER TABLE Post DISABLE CHANGE_TRACKING;
-- disable change tracking for database Blog
ALTER DATABASE Blog SET CHANGE_TRACKING = OFF;

You can read about the differences between the two options here. Choose the one that best suits your needs!
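To round this out, here is a hypothetical sketch (not from the article) of how an application might consume Change Tracking from C#: keep the last synced version, ask CHANGETABLE for anything newer, then remember the new current version for the next poll. The connection string is an assumption, and the table layout matches the SQL above:

using System;
using System.Data.SqlClient;

class ChangeTrackingPoller
{
    static void Main()
    {
        long lastSyncVersion = 0; // in a real app this is persisted between runs

        using (var conn = new SqlConnection("Server=.;Database=Blog;Integrated Security=true"))
        {
            conn.Open();

            var cmd = new SqlCommand(
                @"SELECT c.PostId, c.SYS_CHANGE_OPERATION, p.Title
                  FROM CHANGETABLE(CHANGES Post, @lastVersion) AS c
                  LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId;
                  SELECT CHANGE_TRACKING_CURRENT_VERSION();", conn);
            cmd.Parameters.AddWithValue("@lastVersion", lastSyncVersion);

            using (var reader = cmd.ExecuteReader())
            {
                // First result set: one row per changed Post since lastSyncVersion.
                while (reader.Read())
                    Console.WriteLine("{0} {1} {2}", reader[0], reader[1], reader[2]);

                // Second result set: the version to start from on the next poll
                // (NULL if change tracking is not enabled on the database).
                reader.NextResult();
                if (reader.Read() && !reader.IsDBNull(0))
                    lastSyncVersion = reader.GetInt64(0);
            }

            Console.WriteLine("Next poll starts from version " + lastSyncVersion);
        }
    }
}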

    Read the article

  • How to make your File Adapter pick only one file at a time from a location

    - by anirudh.pucha(at)oracle.com
In SOA 11g, you use the File adapter to read files from a given location. With this read operation it picks up all the files at once. You want to configure the File Adapter so that it picks one file at a time from the given location, with a given polling interval. Solution: Set the "SingleThreadModel" and "MaxRaiseSize" properties for your file adapter. Edit the adapter's jca file and add the following properties: <property name="SingleThreadModel" value="true"/> <property name="MaxRaiseSize" value="1"/> You can also set these properties through JDeveloper, by opening composite.xml, selecting the adapter and then changing the properties through the properties panel.

    Read the article

  • apt-get fails to upgrade, install, remove etc

    - by Kieran Peat
    I upgraded from 11.10 to 12.04, had no issues that I noticed. Recently tried to install something via software center, but it was throwing errors. Changed to trying to sudo apt-get install instead but again no luck. I've genuinely tried as much as I know to fix this, but I can't so I figured I'd ask here. I've done sudo apt-get update successfully but sudo apt-get upgrade failed with... You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not installed libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is installed libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is installed E: Unmet dependencies. Try using -f. Using sudo apt-get -f install... The following packages were automatically installed and are no longer required: libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libqtcore4:i386 libssl1.0.0:i386 The following NEW packages will be installed libqtcore4:i386 The following packages will be upgraded: libssl1.0.0:i386 1 upgraded, 1 newly installed, 0 to remove and 33 not upgraded. 20 not fully installed or removed. Need to get 0 B/3,063 kB of archives. After this operation, 9,044 kB of additional disk space will be used. Do you want to continue [Y/n]? y E: Internal Error, No file name for libssl1.0.0 I've tried sudo apt-get remove libssl1.0.0 and sudo apt-get remove libssl1.0.0:i386 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. 
ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). I've also tried sudo apt-get dist-upgrade, sudo apt-get autoremove etc without any luck. I also tried to download the .deb and use dpkg -i, but that failed and did not fully understand the method to be honest. Edit This is in response to the comments ref: sudo apt-get install -f doesn't fix broken packages. And now? 
sudo dpkg --configure -a --force-all
dpkg: error processing libssl1.0.0 (--configure): libssl1.0.0:amd64 1.0.1-4ubuntu5.2 cannot be configured because libssl1.0.0:i386 is in a different version (1.0.0e-2ubuntu4.6)
dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2)
dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386')
[the last two messages repeat verbatim several dozen times]
dpkg: too many errors, stopping
Errors were encountered while processing: libssl1.0.0 libssl1.0.0:i386 ... libssl1.0.0:i386
Processing was halted because there were too many errors.
Ref: Package manager doesn't work anymore - moving /var/lib/dpkg/info/libssl..
kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:i386.postinst /var/lib/dpkg/info/libssl1.0.0:i386.postinst.bad
kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:amd64.postinst /var/lib/dpkg/info/libssl1.0.0:amd64.postinst.bad
kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libssl is not available, but is referred to by another package. 
This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libssl' has no installation candidate kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl1.0.0 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is to be installed libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). kieran@kieran-EX58-UD3R:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libqtcore4:i386 libssl1.0.0:i386 The following NEW packages will be installed libqtcore4:i386 The following packages will be upgraded: libssl1.0.0:i386 1 upgraded, 1 newly installed, 0 to remove and 58 not upgraded. 20 not fully installed or removed. Need to get 3,063 kB of archives. After this operation, 9,044 kB of additional disk space will be used. Do you want to continue [Y/n]? 
y Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libssl1.0.0 i386 1.0.1-4ubuntu5.2 [1,002 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libqtcore4 i386 4:4.8.1-0ubuntu4.1 [2,061 kB] Fetched 3,063 kB in 4s (731 kB/s) E: Internal Error, No file name for libssl1.0.0 ref: libssl Dependencies removing libssl1.0.0:i386 kieran@kieran-EX58-UD3R:~$ sudo apt-get remove libssl1.0.0:i386 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
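A common recovery path for this kind of version skew (offered as a sketch only, not a verified fix; the exact filenames depend on the mirror's current versions) is to bring both architectures of libssl1.0.0 to the same version in a single dpkg call, so that neither side is left behind:

    # Restore the maintainer scripts that were renamed above, so the
    # reinstalled packages can configure normally.
    sudo mv /var/lib/dpkg/info/libssl1.0.0:i386.postinst.bad /var/lib/dpkg/info/libssl1.0.0:i386.postinst
    sudo mv /var/lib/dpkg/info/libssl1.0.0:amd64.postinst.bad /var/lib/dpkg/info/libssl1.0.0:amd64.postinst
    # Fetch the current archive build of both architectures (assumes
    # 'apt-get download' is available, as it is on 12.04)...
    cd /tmp
    apt-get download libssl1.0.0 libssl1.0.0:i386
    # ...and install them together so dpkg sees one consistent version.
    sudo dpkg -i libssl1.0.0_*_amd64.deb libssl1.0.0_*_i386.deb
    # Let apt settle whatever is still half-configured.
    sudo apt-get -f install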

    Read the article

  • Is there an SO API which can fetch all Questions & Answers for a particular Keyword

    - by user4203
    I am looking for an API that can fetch all the Questions & Answers from SO and the other Stack Exchange sites for a particular "keyword". Later, via XML-RPC, the questions would be posted as blog posts and the answers as those posts' answers. I'm just wondering whether this is possible with an API. One of my friends suggested that we should scrape, but I don't want screen scraping; instead I am looking for API requests that can handle this.
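    For reference, the public Stack Exchange API does expose this kind of query. A minimal sketch (assuming the requests package is installed; the endpoints shown are from API v2.2, and anything beyond light use needs a registered API key) might look like:

        # Sketch: pull questions matching a keyword, then fetch their answers.
        import requests

        API = "https://api.stackexchange.com/2.2"

        def search_questions(keyword, site="stackoverflow"):
            # /search matches the keyword against question titles.
            resp = requests.get(API + "/search", params={
                "intitle": keyword,
                "site": site,
                "order": "desc",
                "sort": "activity",
                "filter": "withbody",  # include the question body in each item
            })
            resp.raise_for_status()
            return resp.json()["items"]

        def answers_for(question_ids, site="stackoverflow"):
            # /questions/{ids}/answers accepts up to 100 semicolon-separated ids.
            ids = ";".join(str(q) for q in question_ids)
            resp = requests.get(API + "/questions/" + ids + "/answers", params={
                "site": site,
                "filter": "withbody",
            })
            resp.raise_for_status()
            return resp.json()["items"]

        questions = search_questions("xml")
        answers = answers_for([q["question_id"] for q in questions[:5]])
        print(len(questions), "questions,", len(answers), "answers")

    The XML-RPC half is then a separate step; Python's standard xmlrpc.client module could push each fetched item to the blog.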

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.

    The VS 2010 debugger has a ton of great new capabilities. Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release. I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well. With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful.

    Breakpoint Labels

    VS 2010 includes new support for better managing debugger breakpoints. One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution. With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging. Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label

    The breakpoints window within Visual Studio 2010 lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base). The first and last breakpoint in the list break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage). The dialog that appears lets us create a new string label for our breakpoints or select an existing one we have already defined. In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover. When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label.

    Filtering/Sorting Breakpoints by Label

    We can use the “Search” combobox to quickly filter/sort breakpoints by label – for example, showing only those breakpoints with the “Lifetime Management” label.

    Toggling Breakpoints On/Off by Label

    We can also toggle sets of breakpoints on/off by label group. We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click.

    Importing/Exporting Breakpoints

    VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later. To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window. Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down). I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips

    Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger. Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value), and then click the new “pin” button on it to make the DataTip always visible. You can “pin” any number of DataTips you want onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well. For example, I might “pin” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name” – the last two variables being sub-properties of the “Request” object.

    Associating Comments with Pinned DataTips

    Hovering over a pinned DataTip exposes some additional UI within the debugger. Clicking the comment button at the bottom of this UI expands the DataTip and allows you to optionally add a comment with it. This makes it really easy to attach and track debugging notes.

    Pinned DataTips are Usable Across Both Debug Sessions and Visual Studio Sessions

    Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session, any pinned DataTips will still be there, along with any comments you associate with them. Pinned DataTips can also be used across multiple Visual Studio sessions. This means that if you close your project, shut down Visual Studio, and then later open the project up again, any pinned DataTips will still be there, along with any comments you associate with them.

    See the Value from Last Debug Session (Great Code Editor Feature)

    How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running. DataTips are by default hidden when you are in the code editor and the debugger isn’t running. On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously set up. Hovering your mouse over a pinned DataTip will cause it to display on the screen – hovering over the first pin in the editor, for example, displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them. This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips

    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.

    Summary

    Visual Studio 2010 includes a bunch of great new debugger features – both big and small. Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the pinned DataTip features are also supported in the free Visual Studio 2010 Express editions). I’ll be covering some of the “big big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
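    As a concrete anchor for the examples above, here is a hypothetical ASP.NET MVC controller (not code from the post or from the MVC source) showing where the two “Lifetime Management” breakpoints and the pinned DataTips would sit:

        // Hypothetical ASP.NET MVC 2 controller, used only to illustrate
        // the breakpoint and DataTip examples described in the post.
        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            public ProductsController()
            {
                // First "Lifetime Management" breakpoint: fires when the
                // framework creates the controller instance.
            }

            public ActionResult Browse(string category)
            {
                // While stopped here you could pin DataTips for "category",
                // "Request.RawUrl" and "Request.LogonUserIdentity.Name".
                ViewData["CurrentCategory"] = category;
                return View();
            }

            protected override void Dispose(bool disposing)
            {
                // Second "Lifetime Management" breakpoint: fires when the
                // framework releases the controller instance.
                base.Dispose(disposing);
            }
        }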

    Read the article
