Search Results

Search found 4492 results on 180 pages for 'register'.


  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
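The capture options class itself isn't shown in this excerpt; as a rough sketch, a configuration object along these lines would cover the properties the capture code below references (the class name and the default exclusion lists are assumptions, not WebSurge's actual implementation):

    using System.Collections.Generic;

    // Hypothetical sketch of the capture configuration object - the property
    // names mirror what the capture code below uses, the rest is assumed.
    public class UrlCaptureConfiguration
    {
        public int ProcessId { get; set; }          // 0 = capture all processes
        public string CaptureDomain { get; set; }   // empty = capture all domains
        public bool IgnoreResources { get; set; }   // skip static resources
        public List<string> ExtensionFilterExclusions { get; set; }
        public List<string> UrlFilterExclusions { get; set; }

        public UrlCaptureConfiguration()
        {
            CaptureDomain = string.Empty;
            ExtensionFilterExclusions = new List<string> { ".css", ".js", ".png", ".jpg", ".gif", ".ico" };
            UrlFilterExclusions = new List<string>();
        }
    }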
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only; as you can see in the commented code you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox resulting in a Session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look up as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method: void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
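For reference, stripped of the WinForms plumbing the whole capture lifecycle comes down to very little code. Here is a minimal sketch using the same FiddlerCore calls shown above (a console host and the same Startup(8888, true, true, true) flags are assumed purely for brevity):

    using System;
    using Fiddler;

    class MinimalCapture
    {
        static void Main()
        {
            FiddlerApplication.AfterSessionComplete += sess =>
            {
                // ignore HTTPS tunnel setup requests
                if (sess.RequestMethod == "CONNECT")
                    return;
                Console.WriteLine("{0} {1}", sess.RequestMethod, sess.fullUrl);
            };

            // same Startup() call as in the capture form above
            FiddlerApplication.Startup(8888, true, true, true);

            Console.WriteLine("Capturing - press Enter to stop...");
            Console.ReadLine();

            if (FiddlerApplication.IsStarted())
                FiddlerApplication.Shutdown();
        }
    }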
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
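To tie this back to the capture code from earlier: since the rootCertExists() check is cheap, one way to wire this up (a sketch, not WebSurge's exact code – the captureHttps flag is hypothetical, and it assumes the third Startup() flag is the SSL decryption switch) is to guard the proxy startup with the certificate install whenever HTTPS decryption is wanted:

    // Sketch: make sure the Fiddler root certificate is in place before
    // starting the proxy with SSL decryption enabled.
    void StartCapture(bool captureHttps)
    {
        if (captureHttps && !InstallCertificate())
        {
            MessageBox.Show("Couldn't install the Fiddler root certificate - " +
                            "HTTPS requests will not be captured.");
            captureHttps = false;
        }

        FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
        FiddlerApplication.Startup(8888, true, captureHttps, true);
    }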
Once the cert is installed you can then capture SSL requests. There’s a gotcha however… Gotcha: FiddlerCore Certificates don’t stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application and come back in and try the same HTTPS capture again and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky Certificates It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache aware version of InstallCertificate looks something like this: public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form’s constructor: public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate, and as long as you run on Windows and don’t need to support iOS or Android devices it’s simply easier to deal with. To integrate into your project, you can remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference has been removed from the project, and that the CertMaker.dll and BCMakeCert.dll files have been removed from disk as well. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation, which is pretty sketchy for more advanced features like certificate installation. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol.
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference as it’s hard to find things in the book and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos! Resources FiddlerCore Download FiddlerCore NuGet Fiddler Capture Sample Form Fiddler Capture Form in West Wind WebSurge (GitHub) Eric Lawrence’s Fiddler Book © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes Summary for lazy readers: Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0 The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application) New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, Coffeescript, Handlebars, Angular, Ember, Knockout, etc. Top links: Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext ASP.NET and Web Tools for Visual Studio 2013 Release Notes Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework Okay, for those of you who are still with me, let's dig in a bit. Quick web dev notes on downloading and installing Visual Studio 2013 I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing. Visual Studio 2013 development settings and color theme When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later. Color theme Sigh.
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter. Signing in During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like  syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do. Overview of shiny new things in ASP.NET land There are a lot of good new things in ASP.NET. I'll list some of my favorite here, but you can read more on the ASP.NET site. One ASP.NET You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, add data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family. Authentication In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host) The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365. Identity There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensiblity. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features: There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted. Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon). New Bootstrap based project templates The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme: Scaffolding The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon. Project Readme page This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs open in the Visual Studio browser. ASP.NET MVC 5 The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC. Attribute routing ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo public class SampleController : Controller { [Route("spaghetti/with-nesting/where-is-waldo")] public string Hiding() { return "You found me!"; } } I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this: public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } } You can read more about Attribute Routing in ASP.NET MVC 5 here. Filter enhancements There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers. ASP.NET Web API 2 ASP.NET Web API 2 includes a lot of new features. Attribute Routing ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article. OAuth 2.0 ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications. OData Improvements ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson. Lots more There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes. OWIN and Katana I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. 
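If you want a feel for what that pluggability looks like in code, here is a minimal Katana middleware sketch (it assumes the Owin and Microsoft.Owin packages are installed; the timing/logging behavior is just an example, not part of the release):

    using System.Diagnostics;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Inline OWIN middleware: runs for every request before the rest of
            // the pipeline (Web API, SignalR, static files, etc.)
            app.Use(async (context, next) =>
            {
                var sw = Stopwatch.StartNew();
                await next();
                Debug.WriteLine("{0} {1} took {2} ms",
                    context.Request.Method, context.Request.Path, sw.ElapsedMilliseconds);
            });

            // other middleware / frameworks get wired up here
        }
    }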
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable. Visual Studio Web Tools Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time. Browser Link Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes, the sky's the limit. New HTML editor The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and ID's for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen. Lots more Visual Studio web dev features That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features. Lots, lots more Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in developing ASP.NET controls that use JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are generated into the HTML body and there’s very little operational control for placement of scripts. If you have multiple controls or several of the same control that need to place the same scripts onto the page it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even if you are rendering references to CDN resources. Natively ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax): RegisterClientScriptBlock Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes. RegisterStartupScript Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load time routines. RegisterClientScriptInclude Embeds a reference to a script from a url into the page. RegisterClientScriptResource Embeds a reference to a script from a resource file, generating a long WebResource url string. All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further the ScriptManager is a bear to deal with generically because generic code has to always check and see if it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page, which I think is rather important and a missing feature in the ASP.NET native functionality.
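For reference, this is roughly what the native calls look like from inside a page or control – everything ends up in the HTML body, and block placement (top of the form vs. just before </form>) is the only positioning you get. A short sketch (the script url and registration keys are just examples):

    // Native ASP.NET script registration via Page.ClientScript - all of these
    // render into the HTML body rather than the header.
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        // script block at the top of the form - for callable functions
        Page.ClientScript.RegisterClientScriptBlock(typeof(Page), "hello",
            "function hello() { alert('hello'); }", true);

        // script block just before </form> - for 'startup' code
        Page.ClientScript.RegisterStartupScript(typeof(Page), "callHello",
            "hello();", true);

        // script include from a url - rendered in declaration order
        Page.ClientScript.RegisterClientScriptInclude("jquery",
            ResolveUrl("~/scripts/jquery.min.js"));
    }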
Handling ScriptRenderModes One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations: /// <summary> /// Determines how scripts are included into the page /// </summary> public enum ScriptRenderModes { /// <summary> /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode /// </summary> Inherit, /// Renders the script include at the location of the control /// </summary> Inline, /// <summary> /// Renders the script include into the bottom of the header of the page /// </summary> Header, /// <summary> /// Renders the script include into the top of the header of the page /// </summary> HeaderTop, /// <summary> /// Uses ClientScript or ScriptManager to embed the script include to /// provide standard ASP.NET style rendering in the HTML body. /// </summary> Script, /// <summary> /// Renders script at the bottom of the page before the last Page.Controls /// literal control. Note this may result in unexpected behavior /// if /body and /html are not the last thing in the markup page. /// </summary> BottomOfPage } This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me I often render scripts out of control resources and these scripts often include things like a JavaScript Library (jquery) and a few plug-ins. The order in which these can be loaded is critical so that jQuery.js always loads before any plug-in for example. Typically I end up with a general script layout like this: Core Libraries- HeaderTop Plug-ins: Header ScriptBlocks: Header or Script depending on other dependencies There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s using static methods and control references to gain access to the page and embedding scripts. 
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
The RegisterScriptInclude method is similar but there’s some additional logic in here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression /// <summary> /// Registers a client script reference into the page with the option to specify /// the script location in the page /// </summary> /// <param name="control">Any control instance - typically page</param> /// <param name="type">Type that acts as qualifier (uniqueness)</param> /// <param name="url">the Url to the script resource</param> /// <param name="ScriptRenderModes">Determines where the script is rendered</param> public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode) { const string STR_ScriptResourceIndex = "__ScriptResourceIndex"; if (string.IsNullOrEmpty(url)) return; if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; // Extract just the script filename string fileId = null; // Check resource IDs and try to match to mapped file resources // Used to allow scripts not to be loaded more than once whether // embedded manually (script tag) or via resources with ClientScriptProxy if (url.Contains(".axd?r=")) { string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) ); foreach (ScriptResourceAlias item in ScriptResourceAliases) { if (item.Resource == res) { fileId = item.Alias + ".js"; break; } } if (fileId == null) fileId = url.ToLower(); } else fileId = Path.GetFileName(url).ToLower(); // No dupes - ref script include only once const string identifier = "script_"; if (HttpContext.Current.Items.Contains( identifier + fileId ) ) return; HttpContext.Current.Items.Add(identifier + fileId, string.Empty); // just use script manager or ClientScriptManager if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline) { RegisterClientScriptInclude(control, type,url, url); return; } // Retrieve script index in header int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?; if (index == null) index = 0; StringBuilder sb = new StringBuilder(256); url = WebUtils.ResolveUrl(url); // Embed in header sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>"); if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if (renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString())); HttpContext.Current.Items[STR_ScriptResourceIndex] = index; } There’s a little more code here that deals with cleaning up the passed in Url and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the URL and cache based on that. All of this to avoid doubling up of scripts if called multiple times by multiple instances of the same control for example or several controls that all load the same resources/includes. 
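To make that de-duplication concrete, here is a minimal usage sketch of how a custom server control might register its scripts through ClientScriptProxy from OnPreRender. The control name, script URL and startup call are hypothetical, and the snippet assumes the West Wind Web Toolkit namespace is referenced; the point is simply that many instances of the same control can register the same include and only one script tag ends up in the header:

using System;
using System.Web.UI.WebControls;
using Westwind.Web.Controls;   // assumed namespace for ClientScriptProxy / ScriptRenderModes

public class FancyListControl : WebControl
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        // Shared include: every instance registers it, but the key/file tracking
        // described above ensures it is rendered into the header only once per request
        ClientScriptProxy.Current.RegisterClientScriptInclude(
            this, typeof(FancyListControl),
            ResolveUrl("~/scripts/fancylist.js"),
            ScriptRenderModes.Header);

        // Per-instance startup block: the unique key keeps instances from colliding
        ClientScriptProxy.Current.RegisterClientScriptBlock(
            this, typeof(FancyListControl),
            "fancylist_init_" + ClientID,
            "$('#" + ClientID + "').fancyList();",
            true, ScriptRenderModes.Header);
    }
}

If you drop several of these controls onto one page, the include is emitted once and each instance adds only its own startup block, which is exactly the behavior the duplicate-tracking code above is there to guarantee.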
Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module: /// <summary> /// Returns a WebResource or ScriptResource URL for script resources that are to be /// embedded as script includes. /// </summary> /// <param name="control">Any control</param> /// <param name="type">A type in assembly where resources are located</param> /// <param name="resourceName">Name of the resource to load</param> /// <param name="renderMode">Determines where in the document the link is rendered</param> public void RegisterClientScriptResource(Control control, Type type, string resourceName, ScriptRenderModes renderMode) { string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName); RegisterClientScriptInclude(control, type, resourceUrl, renderMode); } /// <summary> /// Works like GetWebResourceUrl but can be used with javascript resources /// to allow using of resource compression (if the module is loaded). /// </summary> /// <param name="control"></param> /// <param name="type"></param> /// <param name="resourceName"></param> /// <returns></returns> public string GetClientScriptResourceUrl(Control control, Type type, string resourceName) { #if IncludeScriptCompressionModuleSupport // If wwScriptCompression Module through Web.config is loaded use it to compress // script resources by using wcSC.axd Url the module intercepts if (ScriptCompressionModule.ScriptCompressionModuleActive) { string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName); if (type.Assembly != GetType().Assembly) url += "&t=" + HttpUtility.UrlEncode(type.FullName); return WebUtils.ResolveUrl(url); } #endif return control.Page.ClientScript.GetWebResourceUrl(type, resourceName); } This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded which means there’s nothing specific to deal with other than the custom compression module logic which is nice and easy. What else is there in ClientScriptProxy? ClientscriptProxy also provides a few other useful services beyond what I’ve already covered here: Transparent ScriptManager and ClientScript calls ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development where controls have no control over the scripting environment in place on the page. RegisterCssLink and RegisterCssResource Much like the script embedding functions these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat=”server”. LoadControlScript Is a high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, a url or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way. Check out the full Code You can check out the full code to the ClientScriptProxyClass here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit of which it is part of. 
ControlResources holds a few standard constants and script resource links and the ScriptCompressionModule which is referenced in a few of the script inclusion methods. There’s also another useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed onto the page’s markup including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository West Wind Web Toolkit Home Page © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET JavaScript

    Read the article

  • Creating Custom Ajax Control Toolkit Controls

    - by Stephen Walther
    The goal of this blog entry is to explain how you can extend the Ajax Control Toolkit with custom Ajax Control Toolkit controls. I describe how you can create the two halves of an Ajax Control Toolkit control: the server-side control extender and the client-side control behavior. Finally, I explain how you can use the new Ajax Control Toolkit control in a Web Forms page. At the end of this blog entry, there is a link to download a Visual Studio 2010 solution which contains the code for two Ajax Control Toolkit controls: SampleExtender and PopupHelpExtender. The SampleExtender contains the minimum skeleton for creating a new Ajax Control Toolkit control. You can use the SampleExtender as a starting point for your custom Ajax Control Toolkit controls. The PopupHelpExtender control is a super simple custom Ajax Control Toolkit control. This control extender displays a help message when you start typing into a TextBox control. The animated GIF below demonstrates what happens when you click into a TextBox which has been extended with the PopupHelp extender. Here’s a sample of a Web Forms page which uses the control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowPopupHelp.aspx.cs" Inherits="MyACTControls.Web.Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head runat="server"> <title>Show Popup Help</title> </head> <body> <form id="form1" runat="server"> <div> <act:ToolkitScriptManager ID="tsm" runat="server" /> <%-- Social Security Number --%> <asp:Label ID="lblSSN" Text="SSN:" AssociatedControlID="txtSSN" runat="server" /> <asp:TextBox ID="txtSSN" runat="server" /> <act:PopupHelpExtender id="ph1" TargetControlID="txtSSN" HelpText="Please enter your social security number." runat="server" /> <%-- Social Security Number --%> <asp:Label ID="lblPhone" Text="Phone Number:" AssociatedControlID="txtPhone" runat="server" /> <asp:TextBox ID="txtPhone" runat="server" /> <act:PopupHelpExtender id="ph2" TargetControlID="txtPhone" HelpText="Please enter your phone number." runat="server" /> </div> </form> </body> </html> In the page above, the PopupHelp extender is used to extend the functionality of the two TextBox controls. When focus is given to a TextBox control, the popup help message is displayed. An Ajax Control Toolkit control extender consists of two parts: a server-side control extender and a client-side behavior. For example, the PopupHelp extender consists of a server-side PopupHelpExtender control (PopupHelpExtender.cs) and a client-side PopupHelp behavior JavaScript script (PopupHelpBehavior.js). Over the course of this blog entry, I describe how you can create both the server-side extender and the client-side behavior. Writing the Server-Side Code Creating a Control Extender You create a control extender by creating a class that inherits from the abstract ExtenderControlBase class. For example, the PopupHelpExtender control is declared like this: public class PopupHelpExtender: ExtenderControlBase { } The ExtenderControlBase class is part of the Ajax Control Toolkit. This base class contains all of the common server properties and methods of every Ajax Control Toolkit extender control. The ExtenderControlBase class inherits from the ExtenderControl class. The ExtenderControl class is a standard class in the ASP.NET framework located in the System.Web.UI namespace. This class is responsible for generating a client-side behavior. 
The class generates a call to the Microsoft Ajax Library $create() method which looks like this: <script type="text/javascript"> $create(MyACTControls.PopupHelpBehavior, {"HelpText":"Please enter your social security number.","id":"ph1"}, null, null, $get("txtSSN")); }); </script> The JavaScript $create() method is part of the Microsoft Ajax Library. The reference for this method can be found here: http://msdn.microsoft.com/en-us/library/bb397487.aspx This method accepts the following parameters: type – The type of client behavior to create. The $create() method above creates a client PopupHelpBehavior. Properties – Enables you to pass initial values for the properties of the client behavior. For example, the initial value of the HelpText property. This is how server property values are passed to the client. Events – Enables you to pass client-side event handlers to the client behavior. References – Enables you to pass references to other client components. Element – The DOM element associated with the client behavior. This will be the DOM element associated with the control being extended such as the txtSSN TextBox. The $create() method is generated for you automatically. You just need to focus on writing the server-side control extender class. Specifying the Target Control All Ajax Control Toolkit extenders inherit a TargetControlID property from the ExtenderControlBase class. This property, the TargetControlID property, points at the control that the extender control extends. For example, the Ajax Control Toolkit TextBoxWatermark control extends a TextBox, the ConfirmButton control extends a Button, and the Calendar control extends a TextBox. You must indicate the type of control which your extender is extending. You indicate the type of control by adding a [TargetControlType] attribute to your control. For example, the PopupHelp extender is declared like this: [TargetControlType(typeof(TextBox))] public class PopupHelpExtender: ExtenderControlBase { } The PopupHelp extender can be used to extend a TextBox control. If you try to use the PopupHelp extender with another type of control then an exception is thrown. If you want to create an extender control which can be used with any type of ASP.NET control (Button, DataView, TextBox or whatever) then use the following attribute: [TargetControlType(typeof(Control))] Decorating Properties with Attributes If you decorate a server-side property with the [ExtenderControlProperty] attribute then the value of the property gets passed to the control’s client-side behavior. The value of the property gets passed to the client through the $create() method discussed above. The PopupHelp control contains the following HelpText property: [ExtenderControlProperty] [RequiredProperty] public string HelpText { get { return GetPropertyValue("HelpText", "Help Text"); } set { SetPropertyValue("HelpText", value); } } The HelpText property determines the help text which pops up when you start typing into a TextBox control. Because the HelpText property is decorated with the [ExtenderControlProperty] attribute, any value assigned to this property on the server is passed to the client automatically. For example, if you declare the PopupHelp extender in a Web Form page like this: <asp:TextBox ID="txtSSN" runat="server" /> <act:PopupHelpExtender id="ph1" TargetControlID="txtSSN" HelpText="Please enter your social security number." 
runat="server" />   Then the PopupHelpExtender renders the call to the the following Microsoft Ajax Library $create() method: $create(MyACTControls.PopupHelpBehavior, {"HelpText":"Please enter your social security number.","id":"ph1"}, null, null, $get("txtSSN")); You can see this call to the JavaScript $create() method by selecting View Source in your browser. This call to the $create() method calls a method named set_HelpText() automatically and passes the value “Please enter your social security number”. There are several attributes which you can use to decorate server-side properties including: ExtenderControlProperty – When a property is marked with this attribute, the value of the property is passed to the client automatically. ExtenderControlEvent – When a property is marked with this attribute, the property represents a client event handler. Required – When a value is not assigned to this property on the server, an error is displayed. DefaultValue – The default value of the property passed to the client. ClientPropertyName – The name of the corresponding property in the JavaScript behavior. For example, the server-side property is named ID (uppercase) and the client-side property is named id (lower-case). IDReferenceProperty – Applied to properties which refer to the IDs of other controls. URLProperty – Calls ResolveClientURL() to convert from a server-side URL to a URL which can be used on the client. ElementReference – Returns a reference to a DOM element by performing a client $get(). The WebResource, ClientResource, and the RequiredScript Attributes The PopupHelp extender uses three embedded resources named PopupHelpBehavior.js, PopupHelpBehavior.debug.js, and PopupHelpBehavior.css. The first two files are JavaScript files and the final file is a Cascading Style sheet file. These files are compiled as embedded resources. You don’t need to mark them as embedded resources in your Visual Studio solution because they get added to the assembly when the assembly is compiled by a build task. You can see that these files get embedded into the MyACTControls assembly by using Red Gate’s .NET Reflector tool: In order to use these files with the PopupHelp extender, you need to work with both the WebResource and the ClientScriptResource attributes. The PopupHelp extender includes the following three WebResource attributes. [assembly: WebResource("PopupHelp.PopupHelpBehavior.js", "text/javascript")] [assembly: WebResource("PopupHelp.PopupHelpBehavior.debug.js", "text/javascript")] [assembly: WebResource("PopupHelp.PopupHelpBehavior.css", "text/css", PerformSubstitution = true)] These WebResource attributes expose the embedded resource from the assembly so that they can be accessed by using the ScriptResource.axd or WebResource.axd handlers. The first parameter passed to the WebResource attribute is the name of the embedded resource and the second parameter is the content type of the embedded resource. The PopupHelp extender also includes the following ClientScriptResource and ClientCssResource attributes: [ClientScriptResource("MyACTControls.PopupHelpBehavior", "PopupHelp.PopupHelpBehavior.js")] [ClientCssResource("PopupHelp.PopupHelpBehavior.css")] Including these attributes causes the PopupHelp extender to request these resources when you add the PopupHelp extender to a page. 
If you open View Source in a browser which uses the PopupHelp extender then you will see the following link for the Cascading Style Sheet file: <link href="/WebResource.axd?d=0uONMsWXUuEDG-pbJHAC1kuKiIMteQFkYLmZdkgv7X54TObqYoqVzU4mxvaa4zpn5H9ch0RDwRYKwtO8zM5mKgO6C4WbrbkWWidKR07LD1d4n4i_uNB1mHEvXdZu2Ae5mDdVNDV53znnBojzCzwvSw2&amp;t=634417392021676003" type="text/css" rel="stylesheet" /> You also will see the following script include for the JavaScript file: <script src="/ScriptResource.axd?d=pIS7xcGaqvNLFBvExMBQSp_0xR3mpDfS0QVmmyu1aqDUjF06TrW1jVDyXNDMtBHxpRggLYDvgFTWOsrszflZEDqAcQCg-hDXjun7ON0Ol7EXPQIdOe1GLMceIDv3OeX658-tTq2LGdwXhC1-dE7_6g2&amp;t=ffffffff88a33b59" type="text/javascript"></script> The JavaScript file returned by this request to ScriptResource.axd contains the combined scripts for any and all Ajax Control Toolkit controls in a page. By default, the Ajax Control Toolkit combines all of the JavaScript files required by a page into a single JavaScript file. Combining files in this way really speeds up how quickly all of the JavaScript files get delivered from the web server to the browser. So, by default, there will be only one ScriptResource.axd include for all of the JavaScript files required by a page. If you want to disable Script Combining and create separate links, you can do so like this: <act:ToolkitScriptManager ID="tsm" runat="server" CombineScripts="false" /> There is one more important attribute used by Ajax Control Toolkit extenders. The PopupHelp behavior uses the following two RequiredScript attributes to load the JavaScript files which are required by the PopupHelp behavior: [RequiredScript(typeof(CommonToolkitScripts), 0)] [RequiredScript(typeof(PopupExtender), 1)] The first parameter of the RequiredScript attribute represents either the string name of a JavaScript file or the type of an Ajax Control Toolkit control. The second parameter represents the order in which the JavaScript files are loaded (this second parameter is needed because .NET attributes are intrinsically unordered). In this case, the RequiredScript attribute will load the JavaScript files associated with the CommonToolkitScripts type and the JavaScript files associated with the PopupExtender in that order. The PopupHelp behavior depends on these JavaScript files. Writing the Client-Side Code The PopupHelp extender uses a client-side behavior written with the Microsoft Ajax Library.
Here is the complete code for the client-side behavior: (function () { // The unique name of the script registered with the // client script loader var scriptName = "PopupHelpBehavior"; function execute() { Type.registerNamespace('MyACTControls'); MyACTControls.PopupHelpBehavior = function (element) { /// <summary> /// A behavior which displays popup help for a textbox /// </summmary> /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param> MyACTControls.PopupHelpBehavior.initializeBase(this, [element]); this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element); this._cssClass = "ajax__popupHelp"; this._popupBehavior = null; this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft; this._popupDiv = null; this._helpText = "Help Text"; this._element$delegates = { focus: Function.createDelegate(this, this._element_onfocus), blur: Function.createDelegate(this, this._element_onblur) }; } MyACTControls.PopupHelpBehavior.prototype = { initialize: function () { MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize'); // Add event handlers for focus and blur var element = this.get_element(); $addHandlers(element, this._element$delegates); }, _ensurePopup: function () { if (!this._popupDiv) { var element = this.get_element(); var id = this.get_id(); this._popupDiv = $common.createElementFromTemplate({ nodeName: "div", properties: { id: id + "_popupDiv" }, cssClasses: ["ajax__popupHelp"] }, element.parentNode); this._popupBehavior = new $create(Sys.Extended.UI.PopupBehavior, { parentElement: element }, {}, {}, this._popupDiv); this._popupBehavior.set_positioningMode(this._popupPosition); } }, get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, _element_onfocus: function (e) { this.show(); }, _element_onblur: function (e) { this.hide(); }, show: function () { this._popupBehavior.show(); }, hide: function () { if (this._popupBehavior) { this._popupBehavior.hide(); } }, dispose: function() { var element = this.get_element(); $clearHandlers(element); if (this._popupBehavior) { this._popupBehavior.dispose(); this._popupBehavior = null; } } }; MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase); Sys.registerComponent(MyACTControls.PopupHelpBehavior, { name: "popupHelp" }); } // execute if (window.Sys && Sys.loader) { Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute); } else { execute(); } })();   In the following sections, we’ll discuss how this client-side behavior works. Wrapping the Behavior for the Script Loader The behavior is wrapped with the following script: (function () { // The unique name of the script registered with the // client script loader var scriptName = "PopupHelpBehavior"; function execute() { // Behavior Content } // execute if (window.Sys && Sys.loader) { Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute); } else { execute(); } })(); This code is required by the Microsoft Ajax Library Script Loader. You need this code if you plan to use a behavior directly from client-side code and you want to use the Script Loader. If you plan to only use your code in the context of the Ajax Control Toolkit then you can leave out this code. 
Registering a JavaScript Namespace The PopupHelp behavior is declared within a namespace named MyACTControls. In the code above, this namespace is created with the following registerNamespace() method: Type.registerNamespace('MyACTControls'); JavaScript does not have any built-in way of creating namespaces to prevent naming conflicts. The Microsoft Ajax Library extends JavaScript with support for namespaces. You can learn more about the registerNamespace() method here: http://msdn.microsoft.com/en-us/library/bb397723.aspx Creating the Behavior The actual Popup behavior is created with the following code. MyACTControls.PopupHelpBehavior = function (element) { /// <summary> /// A behavior which displays popup help for a textbox /// </summmary> /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param> MyACTControls.PopupHelpBehavior.initializeBase(this, [element]); this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element); this._cssClass = "ajax__popupHelp"; this._popupBehavior = null; this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft; this._popupDiv = null; this._helpText = "Help Text"; this._element$delegates = { focus: Function.createDelegate(this, this._element_onfocus), blur: Function.createDelegate(this, this._element_onblur) }; } MyACTControls.PopupHelpBehavior.prototype = { initialize: function () { MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize'); // Add event handlers for focus and blur var element = this.get_element(); $addHandlers(element, this._element$delegates); }, _ensurePopup: function () { if (!this._popupDiv) { var element = this.get_element(); var id = this.get_id(); this._popupDiv = $common.createElementFromTemplate({ nodeName: "div", properties: { id: id + "_popupDiv" }, cssClasses: ["ajax__popupHelp"] }, element.parentNode); this._popupBehavior = new $create(Sys.Extended.UI.PopupBehavior, { parentElement: element }, {}, {}, this._popupDiv); this._popupBehavior.set_positioningMode(this._popupPosition); } }, get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, _element_onfocus: function (e) { this.show(); }, _element_onblur: function (e) { this.hide(); }, show: function () { this._popupBehavior.show(); }, hide: function () { if (this._popupBehavior) { this._popupBehavior.hide(); } }, dispose: function() { var element = this.get_element(); $clearHandlers(element); if (this._popupBehavior) { this._popupBehavior.dispose(); this._popupBehavior = null; } } }; The code above has two parts. The first part of the code is used to define the constructor function for the PopupHelp behavior. This is a factory method which returns an instance of a PopupHelp behavior: MyACTControls.PopupHelpBehavior = function (element) { } The second part of the code modified the prototype for the PopupHelp behavior: MyACTControls.PopupHelpBehavior.prototype = { } Any code which is particular to a single instance of the PopupHelp behavior should be placed in the constructor function. For example, the default value of the _helpText field is assigned in the constructor function: this._helpText = "Help Text"; Any code which is shared among all instances of the PopupHelp behavior should be added to the PopupHelp behavior’s prototype. 
For example, the public HelpText property is added to the prototype: get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, Registering a JavaScript Class After you create the PopupHelp behavior, you must register the behavior as a class by using the Microsoft Ajax registerClass() method like this: MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase); This call to registerClass() registers the PopupHelp behavior as a class which derives from the base Sys.Extended.UI.BehaviorBase class. Like the ExtenderControlBase class on the server side, the BehaviorBase class on the client side contains methods used by every behavior. The documentation for the BehaviorBase class can be found here: http://msdn.microsoft.com/en-us/library/bb311020.aspx The most important methods and properties of the BehaviorBase class are the following: dispose() – Use this method to clean up all resources used by your behavior. In the case of the PopupHelp behavior, the dispose() method is used to remove the event handlers created by the behavior and dispose the Popup behavior. get_element() -- Use this property to get the DOM element associated with the behavior. In other words, the DOM element which the behavior extends. get_id() – Use this property to get the ID of the current behavior. initialize() – Use this method to initialize the behavior. This method is called after all of the properties are set by the $create() method. Creating Debug and Release Scripts You might have noticed that the PopupHelp behavior uses two scripts named PopupHelpBehavior.js and PopupHelpBehavior.debug.js. However, you never create these two scripts. Instead, you only create a single script named PopupHelpBehavior.pre.js. The pre in PopupHelpBehavior.pre.js stands for preprocessor. When you build the Ajax Control Toolkit (or the sample Visual Studio Solution at the end of this blog entry), a build task named JSBuild generates the PopupHelpBehavior.js release script and PopupHelpBehavior.debug.js debug script automatically. The JSBuild preprocessor supports the following directives: #IF #ELSE #ENDIF #INCLUDE #LOCALIZE #DEFINE #UNDEFINE The preprocessor directives are used to mark code which should only appear in the debug version of the script. The directives are used extensively in the Microsoft Ajax Library. For example, the Microsoft Ajax Library Array.contains() method is created like this: $type.contains = function Array$contains(array, item) { //#if DEBUG var e = Function._validateParams(arguments, [ {name: "array", type: Array, elementMayBeNull: true}, {name: "item", mayBeNull: true} ]); if (e) throw e; //#endif return (indexOf(array, item) >= 0); } Notice that you add each of the preprocessor directives inside a JavaScript comment. The comment prevents Visual Studio from getting confused with its Intellisense. The release version, but not the debug version, of the PopupHelpBehavior script is also minified automatically by the Microsoft Ajax Minifier. The minifier is invoked by a build step in the project file. Conclusion The goal of this blog entry was to explain how you can create custom AJAX Control Toolkit controls. In the first part of this blog entry, you learned how to create the server-side portion of an Ajax Control Toolkit control.
You learned how to derive a new control from the ExtenderControlBase class and decorate its properties with the necessary attributes. Next, in the second part of this blog entry, you learned how to create the client-side portion of an Ajax Control Toolkit control by creating a client-side behavior with JavaScript. You learned how to use the methods of the Microsoft Ajax Library to extend your client behavior from the BehaviorBase class. Download the Custom ACT Starter Solution

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign-in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
Many things are new in Solaris 11, and Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple: # zfs create -o mountpoint=/export/repo rpool/ai/repo # zfs create rpool/ai/repo/sol11 # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt # rsync -aP /mnt/repo /export/repo/sol11 # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@fcs # pkgrepo info -s /export/repo/sol11/repo PUBLISHER PACKAGES STATUS UPDATED solaris 4292 online 2012-03-12T20:47:15.378639Z That's all there is to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher: # pkg set-publisher -g file:///export/repo/sol11/repo solaris In case I'd want to access this repository through a (virtual) network, I'll now quickly activate the repository-service: # svccfg -s application/pkg/server \ setprop pkg/inst_root=/export/repo/sol11/repo # svccfg -s application/pkg/server setprop pkg/readonly=true # svcadm refresh application/pkg/server # svcadm enable application/pkg/server That's all you need - now point your browser to http://localhost/ to view your beautiful repository-server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: a full repository with all updates and at the same time incremental ones up to each of the updates: # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*' # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@sru4 # zfs set snapdir=visible rpool/ai/repo/sol11 # svcadm restart svc:/application/pkg/server:default The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs.  If you like, you can also create another repository service for each update, running on a separate port. But now let's continue with the AI server.  Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.  
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  For a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: # zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/solaris-auto-install@5.11,5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC server comes with several SMF services. We at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.) The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20.  This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf.  An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Hence, profiles are very similar to the old sysid.cfg file.

The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one.  Then modify that to our liking and give it back to the install server to use:

root@ai-server:~# mkdir -p /export/install/configs/manifests
root@ai-server:~# cd /export/install/configs/manifests
root@ai-server:~# installadm export -n x86-fcs -m orig_default \
   -o orig_default.xml
root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
root@ai-server:~# vi s11-fcs.small.local.xml
root@ai-server:~# more s11-fcs.small.local.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="S11 Small fcs local">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
        </image>
      </destination>
      <source>
        <publisher name="solaris">
          <origin name="http://192.168.2.12/"/>
        </publisher>
      </source>
      <!-- By default the latest build available, in the specified IPS
           repository, is installed. If another build is required, the
           build number has to be appended to the 'entire' package in
           the following form:
           <name>pkg:/entire@0.5.11-0.build#</name>
      -->
      <software_data action="install">
        <name>pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0</name>
        <name>pkg:/group/system/solaris-small-server</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>
root@ai-server:~# installadm create-manifest -n x86-fcs -d \
   -f ./s11-fcs.small.local.xml
root@ai-server:~# installadm list -m -n x86-fcs
Manifest             Status    Criteria
--------             ------    --------
S11 Small fcs local  Default   None
orig_default         Inactive  None

The major points in this new manifest are:
- Install "solaris-small-server"
- Install fewer locales than the default.  I'm not that fluent in French or Japanese...
- Use my own package service as publisher, running on IP address 192.168.2.12
- Install the initial release of Solaris 11:  pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0

Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead.

root@ai-server:~# cd /etc/netboot
root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
root@ai-server:~# vi menu.lst.01080027AA3DB1
root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
1,2c1,2
< default=1
< timeout=10
---
> default=0
> timeout=30
root@ai-server:~# more menu.lst.01080027AA3DB1
default=1
timeout=10
min_mem64=0
title Oracle Solaris 11 11/11 Text Installer and command line
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive
title Oracle Solaris 11 11/11 Automated Install
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

Now just boot the client off the network using PXE boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?
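A final sanity check before the PXE boot doesn't hurt.  This is just a sketch of how I'd verify the setup (the -c, -m and -p options of "installadm list" show the clients, manifests and profiles known to a service):

root@ai-server:~# installadm list -n x86-fcs -c    # registered clients
root@ai-server:~# installadm list -n x86-fcs -m    # manifests and their criteria
root@ai-server:~# installadm list -n x86-fcs -p    # profiles and their criteria

If mars shows up as a client and mars.xml is listed with its ipv4 criterion, the hands-off installation should find everything it needs.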

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET and integrate with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

For the Curious

From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities.

Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction of migrating users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger). 
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

*****************************************
Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
*****************************************

The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to the J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which acts as the root application.

Differences between .NET Accelerator and WSRP Producer

Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer: a WSRP request is sent from the Consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, and then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. 
In this manner, the WSRP Producer acts more as a Request Filter than a proxy in the WSRP transactions between Producer and Consumer.

Highly Recommended

Enabling Logging

Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

Things You Must Do

Merge Web.Config

Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :-) Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

Use the WSRP Producer Master Page

The Master Page installed for the WSRP Producer provides common, hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master".

You then replace:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>
With
<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">
And
</HEAD>
<body>
<form id="theForm" method="post" runat="server">
With
</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">
And finally
</form>
</body>
</HTML>
With
</asp:Content>

In the event you already use Master Pages, adapt your existing Master Pages to be sub masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this.

It Happened to Me, It Might Happen to You…Or Not

Watch for Use of Session or Request in OnInit

In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to move that logic into a new method and call that method once the necessary object(s) are fully available; I find doing this at the start of the Page_Load method works best.

Case Sensitivity

Markup in .NET applications is often inconsistently cased, and ASP.NET tolerates this, but Java is case sensitive. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. 
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use with the duplicate rule. For example:

<RewriterRule>
<LookFor>(href=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>

May need to be duplicated as:

<RewriterRule>
<LookFor>(HREF=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>

While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?

Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You

This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be to simply update the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted:

Error      155         The name 'form1' does not exist in the current context                C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs                51           52           legacywebsite.module

Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced: the controls become nested inside other controls, and finding them requires a recursive search that the FindControl method does not perform. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

Becomes this:

ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}

Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary. 
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.
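A quick companion to the "Enabling Logging" section above: the change to Global.asax.cs is a one-liner inside Application_Start. The snippet below is only a sketch - the namespace and class names are placeholders, and it assumes the log4net assembly shipped with the WSRP Producer is already referenced by the child application:

using System;
using System.Web;

namespace LegacyWebSite   // placeholder namespace; use your application's own
{
    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // Pick up the log4net configuration so WSRPProducer.log also
            // captures activity from this child application.
            log4net.Config.XmlConfigurator.Configure();
        }
    }
}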

    Read the article

  • help with fixing fwts errors log

    - by jasmines
    Here is an extract of results.log: MTRR validation. Test 1 of 3: Validate the kernel MTRR IOMEM setup. FAILED [MEDIUM] MTRRIncorrectAttr: Test 1, Memory range 0xc0000000 to 0xdfffffff (PCI Bus 0000:00) has incorrect attribute Write-Combining. FAILED [MEDIUM] MTRRIncorrectAttr: Test 1, Memory range 0xfee01000 to 0xffffffff (PCI Bus 0000:00) has incorrect attribute Write-Protect. ==================================================================================================== Test 1 of 1: Kernel log error check. Kernel message: [ 0.208079] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored ADVICE: This is not exactly a failure mode but a warning from the kernel. The _OSI() method has implemented a match to the 'Linux' query in the DSDT and this is redundant because the ACPI driver matches onto the Windows _OSI strings by default. FAILED [HIGH] KlogACPIErrorMethodExecutionParse: Test 1, HIGH Kernel message: [ 3.512783] ACPI Error : Method parse/execution failed [\_SB_.PCI0.GFX0._DOD] (Node f7425858), AE_AML_PACKAGE_LIMIT (20110623/psparse-536) ADVICE: This is a bug picked up by the kernel, but as yet, the firmware test suite has no diagnostic advice for this particular problem. Found 1 unique errors in kernel log. ==================================================================================================== Check if system is using latest microcode. ---------------------------------------------------------------------------------------------------- Cannot read microcode file /usr/share/misc/intel-microcode.dat. Aborted test, initialisation failed. ==================================================================================================== MSR register tests. FAILED [MEDIUM] MSRCPUsInconsistent: Test 1, MSR SYSENTER_ESP (0x175) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0xffffffffffffffff). MSR CPU 0 -> 0xf7bb9c40 vs CPU 1 -> 0xf7bc7c40 FAILED [MEDIUM] MSRCPUsInconsistent: Test 1, MSR MISC_ENABLE (0x1a0) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0x400c51889). MSR CPU 0 -> 0x850088 vs CPU 1 -> 0x850089 ==================================================================================================== Checks firmware has set PCI Express MaxReadReq to a higher value on non-motherboard devices. ---------------------------------------------------------------------------------------------------- Test 1 of 1: Check firmware settings MaxReadReq for PCI Express devices. MaxReadReq for pci://00:00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) is low (128) [Audio device]. MaxReadReq for pci://00:02:00.0 Network controller: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection is low (128) [Network controller]. FAILED [LOW] LowMaxReadReq: Test 1, 2 devices have low MaxReadReq settings. Firmware may have configured these too low. ADVICE: The MaxReadRequest size is set too low and will affect performance. It will provide excellent bus sharing at the cost of bus data transfer rates. Although not a critical issue, it may be worth considering setting the MaxReadRequest size to 256 or 512 to increase throughput on the PCI Express bus. Some drivers (for example the Brocade Fibre Channel driver) allow one to override the firmware settings. Where possible, this BIOS configuration setting is worth increasing it a little more for better performance at a small reduction of bus sharing. ==================================================================================================== PCIe ASPM check. 
---------------------------------------------------------------------------------------------------- Test 1 of 2: PCIe ASPM ACPI test. PCIE ASPM is not controlled by Linux kernel. ADVICE: BIOS reports that Linux kernel should not modify ASPM settings that BIOS configured. It can be intentional because hardware vendors identified some capability bugs between the motherboard and the add-on cards. Test 2 of 2: PCIe ASPM registers test. WARNING: Test 2, RP 00h:1Ch.01h L0s not enabled. WARNING: Test 2, RP 00h:1Ch.01h L1 not enabled. WARNING: Test 2, Device 02h:00h.00h L0s not enabled. WARNING: Test 2, Device 02h:00h.00h L1 not enabled. PASSED: Test 2, PCIE aspm setting matched was matched. WARNING: Test 2, RP 00h:1Ch.05h L0s not enabled. WARNING: Test 2, RP 00h:1Ch.05h L1 not enabled. WARNING: Test 2, Device 85h:00h.00h L0s not enabled. WARNING: Test 2, Device 85h:00h.00h L1 not enabled. PASSED: Test 2, PCIE aspm setting matched was matched. ==================================================================================================== Extract and analyse Windows Management Instrumentation (WMI). Test 1 of 2: Check Windows Management Instrumentation in DSDT Found WMI Method WMAA with GUID: 5FB7F034-2C63-45E9-BE91-3D44E2C707E4, Instance 0x01 Found WMI Event, Notifier ID: 0x80, GUID: 95F24279-4D7B-4334-9387-ACCDC67EF61C, Instance 0x01 PASSED: Test 1, GUID 95F24279-4D7B-4334-9387-ACCDC67EF61C is handled by driver hp-wmi (Vendor: HP). Found WMI Event, Notifier ID: 0xa0, GUID: 2B814318-4BE8-4707-9D84-A190A859B5D0, Instance 0x01 FAILED [MEDIUM] WMIUnknownGUID: Test 1, GUID 2B814318-4BE8-4707-9D84-A190A859B5D0 is unknown to the kernel, a driver may need to be implemented for this GUID. ADVICE: A WMI driver probably needs to be written for this event. It can checked for using: wmi_has_guid("2B814318-4BE8-4707-9D84-A190A859B5D0"). One can install a notify handler using wmi_install_notify_handler("2B814318-4BE8-4707-9D84-A190A859B5D0", handler, NULL). http://lwn.net/Articles/391230 describes how to write an appropriate driver. Found WMI Object, Object ID AB, GUID: 05901221-D566-11D1-B2F0-00A0C9062910, Instance 0x01, Flags: 00 Found WMI Method WMBA with GUID: 1F4C91EB-DC5C-460B-951D-C7CB9B4B8D5E, Instance 0x01 Found WMI Object, Object ID BC, GUID: 2D114B49-2DFB-4130-B8FE-4A3C09E75133, Instance 0x7f, Flags: 00 Found WMI Object, Object ID BD, GUID: 988D08E3-68F4-4C35-AF3E-6A1B8106F83C, Instance 0x19, Flags: 00 Found WMI Object, Object ID BE, GUID: 14EA9746-CE1F-4098-A0E0-7045CB4DA745, Instance 0x01, Flags: 00 Found WMI Object, Object ID BF, GUID: 322F2028-0F84-4901-988E-015176049E2D, Instance 0x01, Flags: 00 Found WMI Object, Object ID BG, GUID: 8232DE3D-663D-4327-A8F4-E293ADB9BF05, Instance 0x01, Flags: 00 Found WMI Object, Object ID BH, GUID: 8F1F6436-9F42-42C8-BADC-0E9424F20C9A, Instance 0x00, Flags: 00 Found WMI Object, Object ID BI, GUID: 8F1F6435-9F42-42C8-BADC-0E9424F20C9A, Instance 0x00, Flags: 00 Found WMI Method WMAC with GUID: 7391A661-223A-47DB-A77A-7BE84C60822D, Instance 0x01 Found WMI Object, Object ID BJ, GUID: DF4E63B6-3BBC-4858-9737-C74F82F821F3, Instance 0x05, Flags: 00 ==================================================================================================== Disassemble DSDT to check for _OSI("Linux"). ---------------------------------------------------------------------------------------------------- Test 1 of 1: Disassemble DSDT to check for _OSI("Linux"). 
This is not strictly a failure mode, it just alerts one that this has been defined in the DSDT and probably should be avoided since the Linux ACPI driver matches onto the Windows _OSI strings { If (_OSI ("Linux")) { Store (0x03E8, OSYS) } If (_OSI ("Windows 2001")) { Store (0x07D1, OSYS) } If (_OSI ("Windows 2001 SP1")) { Store (0x07D1, OSYS) } If (_OSI ("Windows 2001 SP2")) { Store (0x07D2, OSYS) } If (_OSI ("Windows 2006")) { Store (0x07D6, OSYS) } If (LAnd (MPEN, LEqual (OSYS, 0x07D1))) { TRAP (0x01, 0x48) } TRAP (0x03, 0x35) } WARNING: Test 1, DSDT implements a deprecated _OSI("Linux") test. ==================================================================================================== 0 passed, 0 failed, 1 warnings, 0 aborted, 0 skipped, 0 info only. ==================================================================================================== ACPI DSDT Method Semantic Tests. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP Failed to install global event handler. Test 22 of 93: Check _PSR (Power Source). ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 22, Detected an infinite loop when evaluating method '\_SB_.AC__._PSR'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 22, \_SB_.AC__._PSR correctly acquired and released locks 16 times. Test 35 of 93: Check _TMP (Thermal Zone Current Temp). ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.DTSZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.DTSZ._TMP correctly acquired and released locks 14 times. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.CPUZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.CPUZ._TMP correctly acquired and released locks 10 times. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.SKNZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.SKNZ._TMP correctly acquired and released locks 10 times. PASSED: Test 35, _TMP correctly returned sane looking value 0x00000b4c (289.2 degrees K) PASSED: Test 35, \_TZ_.BATZ._TMP correctly acquired and released locks 9 times. PASSED: Test 35, _TMP correctly returned sane looking value 0x00000aac (273.2 degrees K) PASSED: Test 35, \_TZ_.FDTZ._TMP correctly acquired and released locks 7 times. 
Test 46 of 93: Check _DIS (Disable). FAILED [MEDIUM] MethodShouldReturnNothing: Test 46, \_SB_.PCI0.LPCB.SIO_.COM1._DIS returned values, but was expected to return nothing. Object returned: INTEGER: 0x00000000 ADVICE: This probably won't cause any errors, but it should be fixed as the AML code is not conforming to the expected behaviour as described in the ACPI specification. FAILED [MEDIUM] MethodShouldReturnNothing: Test 46, \_SB_.PCI0.LPCB.SIO_.LPT0._DIS returned values, but was expected to return nothing. Object returned: INTEGER: 0x00000000 ADVICE: This probably won't cause any errors, but it should be fixed as the AML code is not conforming to the expected behaviour as described in the ACPI specification. Test 61 of 93: Check _WAK (System Wake). Test _WAK(1) System Wake, State S1. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(2) System Wake, State S2. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(3) System Wake, State S3. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(4) System Wake, State S4. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(5) System Wake, State S5. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test 87 of 93: Check _BCL (Query List of Brightness Control Levels Supported). Package has 2 elements: 00: INTEGER: 0x00000000 01: INTEGER: 0x00000000 FAILED [MEDIUM] Method_BCLElementCount: Test 87, Method _BCL should return a package of more than 2 integers, got just 2. Test 88 of 93: Check _BCM (Set Brightness Level). 
ACPICA Exception AE_AML_PACKAGE_LIMIT during execution of method _BCM FAILED [CRITICAL] AEAMLPackgeLimit: Test 88, Detected error 'Package limit' when evaluating '\_SB_.PCI0.GFX0.DD02._BCM'. ==================================================================================================== ACPI table settings sanity checks. ---------------------------------------------------------------------------------------------------- Test 1 of 1: Check ACPI tables. PASSED: Test 1, Table APIC passed. Table ECDT not present to check. FAILED [MEDIUM] FADT32And64BothDefined: Test 1, FADT 32 bit FIRMWARE_CONTROL is non-zero, and X_FIRMWARE_CONTROL is also non-zero. Section 5.2.9 of the ACPI specification states that if the FIRMWARE_CONTROL is non-zero then X_FIRMWARE_CONTROL must be set to zero. ADVICE: The FADT FIRMWARE_CTRL is a 32 bit pointer that points to the physical memory address of the Firmware ACPI Control Structure (FACS). There is also an extended 64 bit version of this, the X_FIRMWARE_CTRL pointer that also can point to the FACS. Section 5.2.9 of the ACPI specification states that if the X_FIRMWARE_CTRL field contains a non zero value then the FIRMWARE_CTRL field *must* be zero. This error is also detected by the Linux kernel. If FIRMWARE_CTRL and X_FIRMWARE_CTRL are defined, then the kernel just uses the 64 bit version of the pointer. PASSED: Test 1, Table HPET passed. PASSED: Test 1, Table MCFG passed. PASSED: Test 1, Table RSDT passed. PASSED: Test 1, Table RSDP passed. Table SBST not present to check. PASSED: Test 1, Table XSDT passed. ==================================================================================================== Re-assemble DSDT and find syntax errors and warnings. ---------------------------------------------------------------------------------------------------- Test 1 of 2: Disassemble and reassemble DSDT FAILED [HIGH] AMLAssemblerError4043: Test 1, Assembler error in line 2261 Line | AML source ---------------------------------------------------------------------------------------------------- 02258| 0x00000000, // Range Minimum 02259| 0xFEDFFFFF, // Range Maximum 02260| 0x00000000, // Translation Offset 02261| 0x00000000, // Length | ^ | error 4043: Invalid combination of Length and Min/Max fixed flags 02262| ,, _Y0E, AddressRangeMemory, TypeStatic) 02263| DWordMemory (ResourceProducer, PosDecode, MinFixed, MaxFixed, Cacheable, ReadWrite, 02264| 0x00000000, // Granularity ==================================================================================================== ADVICE: (for error #4043): This occurs if the length is zero and just one of the resource MIF/MAF flags are set, or the length is non-zero and resource MIF/MAF flags are both set. These are illegal combinations and need to be fixed. See section 6.4.3.5 Address Space Resource Descriptors of version 4.0a of the ACPI specification for more details. 
FAILED [HIGH] AMLAssemblerError4050: Test 1, Assembler error in line 2268 Line | AML source ---------------------------------------------------------------------------------------------------- 02265| 0xFEE01000, // Range Minimum 02266| 0xFFFFFFFF, // Range Maximum 02267| 0x00000000, // Translation Offset 02268| 0x011FEFFF, // Length | ^ | error 4050: Length is not equal to fixed Min/Max window 02269| ,, , AddressRangeMemory, TypeStatic) 02270| }) 02271| Method (_CRS, 0, Serialized) ==================================================================================================== ADVICE: (for error #4050): The minimum address is greater than the maximum address. This is illegal. FAILED [HIGH] AMLAssemblerError1104: Test 1, Assembler error in line 8885 Line | AML source ---------------------------------------------------------------------------------------------------- 08882| Method (_DIS, 0, NotSerialized) 08883| { 08884| DSOD (0x02) 08885| Return (0x00) | ^ | warning level 0 1104: Reserved method should not return a value (_DIS) 08886| } 08887| 08888| Method (_SRS, 1, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 1, Assembler error in line 9195 Line | AML source ---------------------------------------------------------------------------------------------------- 09192| Method (_DIS, 0, NotSerialized) 09193| { 09194| DSOD (0x01) 09195| Return (0x00) | ^ | warning level 0 1104: Reserved method should not return a value (_DIS) 09196| } 09197| 09198| Method (_SRS, 1, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1127: Test 1, Assembler error in line 9242 Line | AML source ---------------------------------------------------------------------------------------------------- 09239| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y21._MAX, MAX2) 09240| CreateByteField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y21._LEN, LEN2) 09241| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y22._INT, IRQ0) 09242| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y23._DMA, DMA0) | ^ | warning level 0 1127: ResourceTag smaller than Field (Tag: 8 bits, Field: 16 bits) 09243| If (RLPD) 09244| { 09245| Store (0x00, Local0) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1128: Test 1, Assembler error in line 18682 Line | AML source ---------------------------------------------------------------------------------------------------- 18679| Store (0x01, Index (DerefOf (Index (Local0, 0x02)), 0x01)) 18680| If (And (WDPE, 0x40)) 18681| { 18682| Wait (\_SB.BEVT, 0x10) | ^ | warning level 0 1128: Result is not used, possible operator timeout will be missed 18683| } 18684| 18685| Store (BRID, Index (DerefOf (Index (Local0, 0x02)), 0x02)) ==================================================================================================== ADVICE: (for warning level 0 #1128): The operation can possibly timeout, and hence the return value indicates an timeout error. However, because the return value is not checked this very probably indicates that the code is buggy. A possible scenario is that a mutex times out and the code attempts to access data in a critical region when it should not. This will lead to undefined behaviour. This should be fixed. Table DSDT (0) reassembly: Found 2 errors, 4 warnings. 
Test 2 of 2: Disassemble and reassemble SSDT PASSED: Test 2, SSDT (0) reassembly, Found 0 errors, 0 warnings. FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 60 Line | AML source ---------------------------------------------------------------------------------------------------- 00057| { 00058| Store (CPDC (Arg0), Local0) 00059| GCAP (Local0) 00060| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00061| } 00062| 00063| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 174 Line | AML source ---------------------------------------------------------------------------------------------------- 00171| { 00172| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00173| GCAP (Local0) 00174| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00175| } 00176| 00177| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 244 Line | AML source ---------------------------------------------------------------------------------------------------- 00241| { 00242| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00243| GCAP (Local0) 00244| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00245| } 00246| 00247| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 290 Line | AML source ---------------------------------------------------------------------------------------------------- 00287| { 00288| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00289| GCAP (Local0) 00290| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00291| } 00292| 00293| Method (_OSC, 4, NotSerialized) ==================================================================================================== Table SSDT (1) reassembly: Found 0 errors, 4 warnings. PASSED: Test 2, SSDT (2) reassembly, Found 0 errors, 0 warnings. PASSED: Test 2, SSDT (3) reassembly, Found 0 errors, 0 warnings. ==================================================================================================== 3 passed, 10 failed, 0 warnings, 0 aborted, 0 skipped, 0 info only. ==================================================================================================== Critical failures: 1 method test, at 1 log line: 1449: Detected error 'Package limit' when evaluating '\_SB_.PCI0.GFX0.DD02._BCM'. 
High failures: 11 klog test, at 1 log line: 121: HIGH Kernel message: [ 3.512783] ACPI Error: Method parse/execution failed [\_SB_.PCI0.GFX0._DOD] (Node f7425858), AE_AML_PACKAGE_LIMIT (20110623/psparse-536) syntaxcheck test, at 1 log line: 1668: Assembler error in line 2261 syntaxcheck test, at 1 log line: 1687: Assembler error in line 2268 syntaxcheck test, at 1 log line: 1703: Assembler error in line 8885 syntaxcheck test, at 1 log line: 1716: Assembler error in line 9195 syntaxcheck test, at 1 log line: 1729: Assembler error in line 9242 syntaxcheck test, at 1 log line: 1742: Assembler error in line 18682 syntaxcheck test, at 1 log line: 1766: Assembler error in line 60 syntaxcheck test, at 1 log line: 1779: Assembler error in line 174 syntaxcheck test, at 1 log line: 1792: Assembler error in line 244 syntaxcheck test, at 1 log line: 1805: Assembler error in line 290 Medium failures: 9 mtrr test, at 1 log line: 76: Memory range 0xc0000000 to 0xdfffffff (PCI Bus 0000:00) has incorrect attribute Write-Combining. mtrr test, at 1 log line: 78: Memory range 0xfee01000 to 0xffffffff (PCI Bus 0000:00) has incorrect attribute Write-Protect. msr test, at 1 log line: 165: MSR SYSENTER_ESP (0x175) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0xffffffffffffffff). msr test, at 1 log line: 173: MSR MISC_ENABLE (0x1a0) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0x400c51889). wmi test, at 1 log line: 528: GUID 2B814318-4BE8-4707-9D84-A190A859B5D0 is unknown to the kernel, a driver may need to be implemented for this GUID. method test, at 1 log line: 1002: \_SB_.PCI0.LPCB.SIO_.COM1._DIS returned values, but was expected to return nothing. method test, at 1 log line: 1011: \_SB_.PCI0.LPCB.SIO_.LPT0._DIS returned values, but was expected to return nothing. method test, at 1 log line: 1443: Method _BCL should return a package of more than 2 integers, got just 2. acpitables test, at 1 log line: 1643: FADT 32 bit FIRMWARE_CONTROL is non-zero, and X_FIRMWARE_CONTROL is also non-zero. Se

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
Hi,
Unfortunately, sometimes a Standby Instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. This script recreates the standby considering some prereqs:
-Database Version should be at least 11gR1
-Dummy instance started on the standby node (Seeking to improve this so it won't be needed)
-Broker configuration hasn't been removed
-In our case we have two TNSNAMES files, one for the Standby creation (using SID) and the other one for production using service names (including broker service name)
-Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)
-The directory tree should not have been modified in the standby host
We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!
#!/bin/ksh ###    NOMBRE / VERSION ###       recrea_dg.sh   v.1.00 ### ###    DESCRIPCION ###       reacreacion de la Standby ### ###    DEVUELVE ###       0 Creacion de STANDBY correcta ###       1 Fallo ### ###    NOTAS ###       Este shell script NO DEBE MODIFICARSE. ###       Todas las variables y constantes necesarias se toman del entorno. ### ###    MODIFICADO POR:    FECHA:        COMENTARIOS: ###    ---------------    ----------    ------------------------------------- ###      Oracle           15/02/2011    Creacion. ### ### ### Cargar entorno ### V_ADMIN_DIR=`dirname $0` . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null if [ $? -ne 0 ] then   echo "Error Loading the environment."   exit 1 fi V_RET=0 V_DATE=`/bin/date` V_DATE_F=`/bin/date +%Y%m%d_%H%M%S` V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ### ### Variables para Recrear el Data Guard ### V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'` if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ] then         V_LOCAL_BR=${V_DB_BR}'01'         V_REMOTE_BR=${V_DB_BR}'02' else         V_LOCAL_BR=${V_DB_BR}'02'         V_REMOTE_BR=${V_DB_BR}'01' fi echo " Getting local instance ROLE ${ORACLE_SID} ..." sqlplus -s /nolog 1>>/dev/null 2>&1 <<-! whenever sqlerror exit 1 connect / as sysdba variable salida number declare   v_database_role v\$database.database_role%type; begin   select database_role into v_database_role from v\$database;   :salida := case v_database_role        when 'PRIMARY' then 2        when 'PHYSICAL STANDBY' then 3        else 4      end; end; / exit :salida ! case $? in 1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; 2) echo " Local Instance with PRIMARY role." 
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Cargamos las variables de los hosts # Cargamos las variables de los hosts PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         ! 
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Eliminamos los ficheros de la base de datos ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-! 
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: Broker status is not SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=0 fi Hope it helps.
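    One possible refinement, offered only as a sketch and not part of the original script: instead of the fixed "sleep 200" before the Broker check, the configuration could be polled until dgmgrl reports SUCCESS or a retry limit is reached. It re-uses the V_DB_USR, V_DB_PWD and V_STANDBY variables the script already defines.
    # Hedged sketch (my suggestion, not in the original script): poll the Broker
    # instead of sleeping a fixed 200 seconds.
    N_TRIES=20
    i=0
    while [ ${i} -lt ${N_TRIES} ]
    do
        dgmgrl ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY} "show configuration verbose;" | grep SUCCESS 1>/dev/null 2>&1
        if [ $? -eq 0 ]
        then
            break
        fi
        sleep 30
        i=`expr ${i} + 1`
    done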

    Read the article

  • BES Express - configure MDS to push messages from 3rd party web application

    - by Max Gontar
    Hi! I have developed IIS web service to send PAP messages using Blackberry Push API over MDS. And there is an application installed on device, configured to receive push messages on appropriate port. Everything works well on MDS simulator. But it's not working well in real environment: I have installed BES Express and register several devices. I can browse MDS url with appropriate port, so url is correct. Also port enabled for reliable pushes is used in push message and in device application. Here is MDS simulator log: <2011-01-12 14:00:03.456 EET>:[272]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = PapServlet: request from 0:0:0:0:0:0:0:1 564 bytes...> <2011-01-12 14:00:03.476 EET>:[273]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Mapping PAP request to push request for pushID:pushID:asdas> <2011-01-12 14:00:03.479 EET>:[274]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = PushServlet: POST request from [UNKNOWN @ 0:0:0:0:0:0:0:1] to [PAPDEST=WAPPUSH%3D2100000A%253A100%2FTYPE%3DUSER%40rim.net&PORT=100&REQUESTURI=/] : -1 bytes...> <2011-01-12 14:00:03.480 EET>:[275]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = submitting push message with id:pushID:asdas> <2011-01-12 14:00:03.482 EET>:[276]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Executing push submit command for pushID:pushID:asdas> <2011-01-12 14:00:03.483 EET>:[278]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Pushing message to: 2100000a> <2011-01-12 14:00:03.484 EET>:[279]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Number of active push connections:1> <2011-01-12 14:00:03.489 EET>:[280]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = added server-initiated connection = -872546301, push id = pushID:asdas> <2011-01-12 14:00:03.491 EET>:[281]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Available threads in DefaultJobPool = 9 running JobRunner: DefaultJobRunner-7> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION => <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Transmission Line Section]:> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = POST / HTTP/1.1> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Headers Section]: 8 headers> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Parameters Section]: 3 parameters> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION => <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Transmission Line Section]:> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = POST / HTTP/1.1> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = 
SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Headers Section]: 9 headers> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Parameters Section]: 3 parameters> <2011-01-12 14:00:03.501 EET>:[284]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Finished JobRunner: DefaultJobRunner-7, available threads in DefaultJobPool = 10, time spent = 8ms> <2011-01-12 14:00:03.521 EET>:[287]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = CreatedSendingQueue, DEVICEPIN = 2100000a> <2011-01-12 14:00:03.526 EET>:[290]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = Sending, TAG = 1288699908, DEVICEPIN = 2100000a, VERSION = 16, CONNECTIONID = -872546301, SEQUENCE = 0, TYPE = NOTIFY-REQUEST, CONNECTIONHANDLER = http, PROTOCOL = TCP, PARAMETERS = [MGONTAR/10.10.0.35:100], SIZE = 339> <2011-01-12 14:00:03.531 EET>:[291]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Number of active push connections:0> <2011-01-12 14:00:03.591 EET>:[292]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = Notification, TAG = 1288699908, STATE = DELIVERED> <2011-01-12 14:00:03.600 EET>:[296]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Device connections: AVG latency (msecs)79> <2011-01-12 14:00:03.600 EET>:[297]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, Removed push connection:-872546301> <2011-01-12 14:00:07.015 EET>:[298]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = RemovedSendingQueue, DEVICEPIN = 2100000a> And here is real MDS log: <2011-01-12 11:35:02.763 GMT>:[3932]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, PapServlet: request from 192.168.1.241 583 bytes...> <2011-01-12 11:35:02.897 GMT>:[3933]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Mapping PAP request to push request for pushID:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.909 GMT>:[3934]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, PushServlet: POST request from [UNKNOWN @ 192.168.1.241] to [PAPDEST=WAPPUSH%3D22D7F6BD%253A7874%2FTYPE%3DUSER%40rim.net&PORT=7874&REQUESTURI=/]> <2011-01-12 11:35:02.909 GMT>:[3934]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<push id: pushID:sdfsdfwerwer> <2011-01-12 11:35:02.910 GMT>:[3935]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, submitting push message with id:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.910 GMT>:[3936]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Executing push submit command for pushID:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.911 GMT>:[3937]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Pushing message to: 22d7f6bd> <2011-01-12 11:35:02.912 GMT>:[3938]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Number of active push connections:1> <2011-01-12 11:35:02.931 GMT>:[3939]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, added server-initiated connection = -1848311806, push id = pushID:sdfsdfwerwer> <2011-01-12 11:35:03.240 GMT>:[3940]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = CreatedSendingQueue, DEVICEPIN = 22d7f6bd, USERID = u3> <2011-01-12 11:35:03.241 GMT>:[3941]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = Sending, TAG = 536543251, DEVICEPIN = 22d7f6bd, USERID = u3, VERSION = 16, CONNECTIONID = -1848311806, SEQUENCE = 0, TYPE = NOTIFY-REQUEST, CONNECTIONHANDLER = http, PROTOCOL = TCP, PARAMETERS = [LDN-Server1/192.168.1.240:7874], SIZE = 383> <2011-01-12 11:35:03.241 GMT>:[3942]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Number of active push connections:0> <2011-01-12 11:35:03.253 
GMT>:[3943]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Sending, VERSION = 1, COMMAND = SEND, TAG = 536543251, SIZE = 570> <2011-01-12 11:35:03.838 GMT>:[3944]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Receiving, VERSION = 1, COMMAND = STATUS, TAG = 536543251, SIZE = 10, STATE = DELIVERED> <2011-01-12 11:35:04.104 GMT>:[3945]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = Notification, TAG = 536543251, STATE = DELIVERED> <2011-01-12 11:35:04.121 GMT>:[3946]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Device connections: AVG latency (msecs)893> <2011-01-12 11:35:04.135 GMT>:[3947]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<INFO >:<LAYER = IPPP, DEVICEPIN = 22d7f6bd, DOMAINNAME = LDN-Server1/192.168.1.240, CONNECTION_TYPE = PUSH_CONN, ConnectionId = -1848311806, DURATION(ms) = 1151, MFH_KBytes = 0, MTH_KBytes = 0.374, MFH_PACKET_COUNT = 0, MTH_PACKET_COUNT = 1> <2011-01-12 11:35:04.144 GMT>:[3948]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, Removed push connection:-1848311806> <2011-01-12 11:35:09.264 GMT>:[3949]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = RemovedSendingQueue, DEVICEPIN = 22d7f6bd, USERID = u3> <2011-01-12 11:35:58.187 GMT>:[3950]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Sending, VERSION = 1, COMMAND = INFO, SIZE = 46> <2011-01-12 11:35:58.187 GMT>:[3951]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Sent health to S27700165[LDN-SERVER1:3200] Health=[0x 0000 0007 0000 0000],Mask=[0x 0000 0007 0000 0000],Load=[60]> As you can see, logs not really differs, message is marked as delivered. But my app on device not really gets this message (as it works in mds simulator) Please advice me, what may be wrong? Is there some certificate to install or security settings I should configure to make this push message came to device application? Thank you! same question on bbforums

    Read the article

  • Inbound SIP calls through Cisco 881 NAT hang up after a few seconds

    - by MasterRoot24
    I've recently moved to a Cisco 881 router for my WAN link. I was previously using a Cisco Linksys WAG320N as my modem/router/WiFi AP/NAT firewall. The WAG320N is now running in bridged mode, so it's simply acting as a modem with one of it's LAN ports connected to FE4 WAN on my Cisco 881. The Cisco 881 get's a DHCP provided IP from my ISP. My LAN is part of default Vlan 1 (192.168.1.0/24). General internet connectivity is working great, I've managed to setup static NAT rules for my HTTP/HTTPS/SMTP/etc. services which are running on my LAN. I don't know whether it's worth mentioning that I've opted to use NVI NAT (ip nat enable as opposed to the traditional ip nat outside/ip nat inside) setup. My reason for this is that NVI allows NAT loopback from my LAN to the WAN IP and back in to the necessary server on the LAN. I run an Asterisk 1.8 PBX on my LAN, which connects to a SIP provider on the internet. Both inbound and outbound calls through the old setup (WAG320N providing routing/NAT) worked fine. However, since moving to the Cisco 881, inbound calls drop after around 10 seconds, whereas outbound calls work fine. The following message is logged on my Asterisk PBX: [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3641 retrans_pkt: Retransmission timeout reached on transmission [email protected] for seqno 1 (Critical Response) -- See https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions Packet timed out after 6528ms with no response [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3670 retrans_pkt: Hanging up call [email protected] - no reply to our critical packet (see https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions). (I know that this is quite a common issue - I've spend the best part of 2 days solid on this, trawling Google.) I've done as I am told and checked https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions. Referring to the section "Other SIP requests" in the page linked above, I believe that the hangup to be caused by the ACK from my SIP provider not being passed back through NAT to Asterisk on my PBX. I tried to ascertain this by dumping the packets on my WAN interface on the 881. I managed to obtain a PCAP dump of packets in/out of my WAN interface. Here's an example of an ACK being reveived by the router from my provider: 689 21.219999 193.x.x.x 188.x.x.x SIP 502 Request: ACK sip:[email protected] | However a SIP trace on the Asterisk server show's that there are no ACK's received in response to the 200 OK from my PBX: http://pastebin.com/wwHpLPPz In the past, I have been strongly advised to disable any sort of SIP ALGs on routers and/or firewalls and the many posts regarding this issue on the internet seem to support this. However, I believe on Cisco IOS, the config command to disable SIP ALG is no ip nat service sip udp port 5060 however, this doesn't appear to help the situation. To confirm that config setting is set: Router1#show running-config | include sip no ip nat service sip udp port 5060 Another interesting twist: for a short period of time, I tried another provider. Luckily, my trial account with them is still available, so I reverted my Asterisk config back to the revision before I integrated with my current provider. I then dialled in to the DDI associated with the trial trunk and the call didn't get hung up and I didn't get the error above! To me, this points at the provider, however I know, like all providers do, will say "There's no issues with our SIP proxies - it's your firewall." 
I'm tempted to agree with this, as this issue was not apparent with the old WAG320N router when it was doing the NAT'ing. I'm sure you'll want to see my running-config too: ! ! Last configuration change at 15:55:07 UTC Sun Dec 9 2012 by xxx version 15.2 no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug datetime msec localtime show-timezone service timestamps log datetime msec localtime show-timezone no service password-encryption service sequence-numbers ! hostname Router1 ! boot-start-marker boot-end-marker ! ! security authentication failure rate 10 log security passwords min-length 6 logging buffered 4096 logging console critical enable secret 4 xxx ! aaa new-model ! ! aaa authentication login local_auth local ! ! ! ! ! aaa session-id common ! memory-size iomem 10 ! crypto pki trustpoint TP-self-signed-xxx enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-xxx revocation-check none rsakeypair TP-self-signed-xxx ! ! crypto pki certificate chain TP-self-signed-xxx certificate self-signed 01 quit no ip source-route no ip gratuitous-arps ip auth-proxy max-login-attempts 5 ip admission max-login-attempts 5 ! ! ! ! ! no ip bootp server ip domain name dmz.merlin.local ip domain list dmz.merlin.local ip domain list merlin.local ip name-server x.x.x.x ip inspect audit-trail ip inspect udp idle-time 1800 ip inspect dns-timeout 7 ip inspect tcp idle-time 14400 ip inspect name autosec_inspect ftp timeout 3600 ip inspect name autosec_inspect http timeout 3600 ip inspect name autosec_inspect rcmd timeout 3600 ip inspect name autosec_inspect realaudio timeout 3600 ip inspect name autosec_inspect smtp timeout 3600 ip inspect name autosec_inspect tftp timeout 30 ip inspect name autosec_inspect udp timeout 15 ip inspect name autosec_inspect tcp timeout 3600 ip cef login block-for 3 attempts 3 within 3 no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO881-SEC-K9 sn ! ! username xxx privilege 15 secret 4 xxx username xxx secret 4 xxx ! ! ! ! ! ip ssh time-out 60 ! ! ! ! ! ! ! ! ! interface FastEthernet0 no ip address ! interface FastEthernet1 no ip address ! interface FastEthernet2 no ip address ! interface FastEthernet3 switchport access vlan 2 no ip address ! interface FastEthernet4 ip address dhcp no ip redirects no ip unreachables no ip proxy-arp ip nat enable duplex auto speed auto ! interface Vlan1 ip address 192.168.1.1 255.255.255.0 no ip redirects no ip unreachables no ip proxy-arp ip nat enable ! interface Vlan2 ip address 192.168.0.2 255.255.255.0 ! ip forward-protocol nd ip http server ip http access-class 1 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! no ip nat service sip udp port 5060 ip nat source list 1 interface FastEthernet4 overload ip nat source static tcp x.x.x.x 80 interface FastEthernet4 80 ip nat source static tcp x.x.x.x 443 interface FastEthernet4 443 ip nat source static tcp x.x.x.x 25 interface FastEthernet4 25 ip nat source static tcp x.x.x.x 587 interface FastEthernet4 587 ip nat source static tcp x.x.x.x 143 interface FastEthernet4 143 ip nat source static tcp x.x.x.x 993 interface FastEthernet4 993 ip nat source static tcp x.x.x.x 1723 interface FastEthernet4 1723 ! ! logging trap debugging logging facility local2 access-list 1 permit 192.168.1.0 0.0.0.255 access-list 1 permit 192.168.0.0 0.0.0.255 no cdp run ! ! ! ! control-plane ! ! banner motd Authorized Access only ! 
line con 0 login authentication local_auth length 0 transport output all line aux 0 exec-timeout 15 0 login authentication local_auth transport output all line vty 0 1 access-class 1 in logging synchronous login authentication local_auth length 0 transport preferred none transport input telnet transport output all line vty 2 4 access-class 1 in login authentication local_auth length 0 transport input ssh transport output all ! ! end ...and, if it's of any use, here's my Asterisk SIP config: [general] context=default ; Default context for calls allowoverlap=no ; Disable overlap dialing support. (Default is yes) udpbindaddr=0.0.0.0 ; IP address to bind UDP listen socket to (0.0.0.0 binds to all) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) tcpenable=no ; Enable server for incoming TCP connections (default is no) tcpbindaddr=0.0.0.0 ; IP address for TCP server to bind to (0.0.0.0 binds to all interfaces) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) srvlookup=yes ; Enable DNS SRV lookups on outbound calls ; Note: Asterisk only uses the first host ; in SRV records ; Disabling DNS SRV lookups disables the ; ability to place SIP calls based on domain ; names to some other SIP users on the Internet ; Specifying a port in a SIP peer definition or ; when dialing outbound calls will supress SRV ; lookups for that peer or call. directmedia=no ; Don't allow direct RTP media between extensions (doesn't work through NAT) externhost=<MY DYNDNS HOSTNAME> ; Our external hostname to resolve to IP and be used in NAT'ed packets localnet=192.168.1.0/24 ; Define our local network so we know which packets need NAT'ing qualify=yes ; Qualify peers by default dtmfmode=rfc2833 ; Set the default DTMF mode disallow=all ; Disallow all codecs by default allow=ulaw ; Allow G.711 u-law allow=alaw ; Allow G.711 a-law ; ---------------------- ; SIP Trunk Registration ; ---------------------- ; Orbtalk register => <MY SIP PROVIDER USER NAME>:[email protected]/<MY DDI> ; Main Orbtalk number ; ---------- ; Trunks ; ---------- [orbtalk] ; Main Orbtalk trunk type=peer insecure=invite host=sipgw3.orbtalk.co.uk nat=yes username=<MY SIP PROVIDER USER NAME> defaultuser=<MY SIP PROVIDER USER NAME> fromuser=<MY SIP PROVIDER USER NAME> secret=xxx context=inbound I really don't know where to go with this. If anyone can help me find out why these calls are being dropped off, I'd be grateful if you could chime in! Please let me know if any further info is required.
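    One detail that stands out in the running-config above, offered as a hedged guess rather than a confirmed fix: the static NAT entries forward the TCP services (80, 443, 25, 587, ...) to the LAN, but there is no equivalent UDP 5060 entry for SIP, so the provider's ACK has to ride on whatever dynamic translation the outbound traffic created. A sketch of the kind of entry that could be added (192.168.1.x is a placeholder for the Asterisk host's LAN address, which is not shown in the post):
    ! Hedged sketch - forward SIP signalling on UDP 5060 to the PBX the same way the
    ! TCP services are already forwarded. Replace 192.168.1.x with the PBX address.
    ip nat source static udp 192.168.1.x 5060 interface FastEthernet4 5060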

    Read the article

  • Implementation of ZipCrypto / Zip 2.0 encryption in java

    - by gomesla
    I'm trying o implement the zipcrypto / zip 2.0 encryption algoritm to deal with encrypted zip files as discussed in http://www.pkware.com/documents/casestudies/APPNOTE.TXT I believe I've followed the specs but just can't seem to get it working. I'm fairly sure the issue has to do with my interpretation of the crc algorithm. The documentation states CRC-32: (4 bytes) The CRC-32 algorithm was generously contributed by David Schwaderer and can be found in his excellent book "C Programmers Guide to NetBIOS" published by Howard W. Sams & Co. Inc. The 'magic number' for the CRC is 0xdebb20e3. The proper CRC pre and post conditioning is used, meaning that the CRC register is pre-conditioned with all ones (a starting value of 0xffffffff) and the value is post-conditioned by taking the one's complement of the CRC residual. Here is the snippet that I'm using for the crc32 public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } Below is my complete code. Can anyone see what I'm doing wrong. package org.apache.commons.compress.archivers.zip; import java.io.IOException; import java.io.InputStream; public class ZipCryptoInputStream extends InputStream { public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } private static final long ENCRYPTION_KEY_1 = 0x12345678; private static final long ENCRYPTION_KEY_2 = 0x23456789; private static final long ENCRYPTION_KEY_3 = 0x34567890; private InputStream baseInputStream = null; private final PKZIPCRC32 checksumEngine = new PKZIPCRC32(); private long[] keys = null; public ZipCryptoInputStream(ZipArchiveEntry zipEntry, InputStream inputStream, String passwd) throws Exception { baseInputStream = inputStream; // Decryption // ---------- // PKZIP encrypts the compressed data stream. Encrypted files must // be decrypted before they can be extracted. // // Each encrypted file has an extra 12 bytes stored at the start of // the data area defining the encryption header for that file. The // encryption header is originally set to random values, and then // itself encrypted, using three, 32-bit keys. The key values are // initialized using the supplied encryption password. 
After each byte // is encrypted, the keys are then updated using pseudo-random number // generation techniques in combination with the same CRC-32 algorithm // used in PKZIP and described elsewhere in this document. // // The following is the basic steps required to decrypt a file: // // 1) Initialize the three 32-bit keys with the password. // 2) Read and decrypt the 12-byte encryption header, further // initializing the encryption keys. // 3) Read and decrypt the compressed data stream using the // encryption keys. // Step 1 - Initializing the encryption keys // ----------------------------------------- // // Key(0) <- 305419896 // Key(1) <- 591751049 // Key(2) <- 878082192 // // loop for i <- 0 to length(password)-1 // update_keys(password(i)) // end loop // // Where update_keys() is defined as: // // update_keys(char): // Key(0) <- crc32(key(0),char) // Key(1) <- Key(1) + (Key(0) & 000000ffH) // Key(1) <- Key(1) * 134775813 + 1 // Key(2) <- crc32(key(2),key(1) >> 24) // end update_keys // // Where crc32(old_crc,char) is a routine that given a CRC value and a // character, returns an updated CRC value after applying the CRC-32 // algorithm described elsewhere in this document. keys = new long[] { ENCRYPTION_KEY_1, ENCRYPTION_KEY_2, ENCRYPTION_KEY_3 }; for (int i = 0; i < passwd.length(); ++i) { update_keys((byte) passwd.charAt(i)); } // Step 2 - Decrypting the encryption header // ----------------------------------------- // // The purpose of this step is to further initialize the encryption // keys, based on random data, to render a plaintext attack on the // data ineffective. // // Read the 12-byte encryption header into Buffer, in locations // Buffer(0) thru Buffer(11). // // loop for i <- 0 to 11 // C <- buffer(i) ^ decrypt_byte() // update_keys(C) // buffer(i) <- C // end loop // // Where decrypt_byte() is defined as: // // unsigned char decrypt_byte() // local unsigned short temp // temp <- Key(2) | 2 // decrypt_byte <- (temp * (temp ^ 1)) >> 8 // end decrypt_byte // // After the header is decrypted, the last 1 or 2 bytes in Buffer // should be the high-order word/byte of the CRC for the file being // decrypted, stored in Intel low-byte/high-byte order. Versions of // PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is // used on versions after 2.0. This can be used to test if the password // supplied is correct or not. byte[] encryptionHeader = new byte[12]; baseInputStream.read(encryptionHeader); for (int i = 0; i < encryptionHeader.length; i++) { encryptionHeader[i] ^= decrypt_byte(); update_keys(encryptionHeader[i]); } } protected byte decrypt_byte() { byte temp = (byte) (keys[2] | 2); return (byte) ((temp * (temp ^ 1)) >> 8); } @Override public int read() throws IOException { // // Step 3 - Decrypting the compressed data stream // ---------------------------------------------- // // The compressed data stream can be decrypted as follows: // // loop until done // read a character into C // Temp <- C ^ decrypt_byte() // update_keys(temp) // output Temp // end loop int read = baseInputStream.read(); read ^= decrypt_byte(); update_keys((byte) read); return read; } private final void update_keys(byte ch) { keys[0] = checksumEngine.crc32((int) keys[0], ch); keys[1] = keys[1] + (byte) keys[0]; keys[1] = keys[1] * 134775813 + 1; keys[2] = checksumEngine.crc32((int) keys[2], (byte) (keys[1] >> 24)); } }
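    For what it's worth, the constant looks like the most likely culprit: 0xdebb20e3 is the APPNOTE's "magic number", not the generator constant used by the usual table-driven, reflected CRC-32 implementation (that table is built from 0xEDB88320), and the (byte) casts in update_keys/decrypt_byte sign-extend where the spec wants unsigned masks. Below is a hedged sketch of the key schedule as I read the spec; the class and method names are mine, not from the original code.
    // Hedged sketch: standard reflected CRC-32 step (table built from 0xEDB88320) plus
    // the ZipCrypto key updates, with unsigned masking instead of (byte) casts.
    public final class ZipCryptoKeys {
        private static final int[] CRC_TABLE = new int[256];
        static {
            for (int i = 0; i < 256; i++) {
                int c = i;
                for (int j = 0; j < 8; j++) {
                    c = ((c & 1) != 0) ? (c >>> 1) ^ 0xEDB88320 : (c >>> 1);
                }
                CRC_TABLE[i] = c;
            }
        }

        // One raw CRC-32 step - no pre/post conditioning, which is what the key schedule needs.
        private static int crc32(int crc, byte b) {
            return (crc >>> 8) ^ CRC_TABLE[(crc ^ b) & 0xFF];
        }

        private final int[] keys = { 0x12345678, 0x23456789, 0x34567890 };

        public void updateKeys(byte ch) {
            keys[0] = crc32(keys[0], ch);
            keys[1] = keys[1] + (keys[0] & 0xFF);            // mask, don't cast to (byte)
            keys[1] = keys[1] * 134775813 + 1;
            keys[2] = crc32(keys[2], (byte) (keys[1] >>> 24));
        }

        public int decryptByte() {
            int temp = (keys[2] | 2) & 0xFFFF;               // the spec's "unsigned short", kept in an int
            return ((temp * (temp ^ 1)) >>> 8) & 0xFF;
        }
    }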

    Read the article

  • Combining MVVM Light Toolkit and Unity 2.0

    - by Alan Cordner
    This is more of a commentary than a question, though feedback would be nice. I have been tasked to create the user interface for a new project we are doing. We want to use WPF and I wanted to learn all of the modern UI design techniques available. Since I am fairly new to WPF I have been researching what is available. I think I have pretty much settled on using MVVM Light Toolkit (mainly because of its "Blendability" and the EventToCommand behavior!), but I wanted to incorporate IoC also. So, here is what I have come up with. I have modified the default ViewModelLocator class in a MVVM Light project to use a UnityContainer to handle dependency injections. Considering I didn't know what 90% of these terms meant 3 months ago, I think I'm on the right track. // Example of MVVM Light Toolkit ViewModelLocator class that implements Microsoft // Unity 2.0 Inversion of Control container to resolve ViewModel dependencies. using Microsoft.Practices.Unity; namespace MVVMLightUnityExample { public class ViewModelLocator { public static UnityContainer Container { get; set; } #region Constructors static ViewModelLocator() { if (Container == null) { Container = new UnityContainer(); // register all dependencies required by view models Container .RegisterType<IDialogService, ModalDialogService>(new ContainerControlledLifetimeManager()) .RegisterType<ILoggerService, LogFileService>(new ContainerControlledLifetimeManager()) ; } } /// <summary> /// Initializes a new instance of the ViewModelLocator class. /// </summary> public ViewModelLocator() { ////if (ViewModelBase.IsInDesignModeStatic) ////{ //// // Create design time view models ////} ////else ////{ //// // Create run time view models ////} CreateMain(); } #endregion #region MainViewModel private static MainViewModel _main; /// <summary> /// Gets the Main property. /// </summary> public static MainViewModel MainStatic { get { if (_main == null) { CreateMain(); } return _main; } } /// <summary> /// Gets the Main property. /// </summary> [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic", Justification = "This non-static member is needed for data binding purposes.")] public MainViewModel Main { get { return MainStatic; } } /// <summary> /// Provides a deterministic way to delete the Main property. /// </summary> public static void ClearMain() { _main.Cleanup(); _main = null; } /// <summary> /// Provides a deterministic way to create the Main property. /// </summary> public static void CreateMain() { if (_main == null) { // allow Unity to resolve the view model and hold onto reference _main = Container.Resolve<MainViewModel>(); } } #endregion #region OrderViewModel // property to hold the order number (injected into OrderViewModel() constructor when resolved) public static string OrderToView { get; set; } /// <summary> /// Gets the OrderViewModel property. /// </summary> public static OrderViewModel OrderViewModelStatic { get { // allow Unity to resolve the view model // do not keep local reference to the instance resolved because we need a new instance // each time - the corresponding View is a UserControl that can be used multiple times // within a single window/view // pass current value of OrderToView parameter to constructor! return Container.Resolve<OrderViewModel>(new ParameterOverride("orderNumber", OrderToView)); } } /// <summary> /// Gets the OrderViewModel property. 
/// </summary> [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic", Justification = "This non-static member is needed for data binding purposes.")] public OrderViewModel Order { get { return OrderViewModelStatic; } } #endregion /// <summary> /// Cleans up all the resources. /// </summary> public static void Cleanup() { ClearMain(); Container = null; } } } And the MainViewModel class showing dependency injection usage: using GalaSoft.MvvmLight; using Microsoft.Practices.Unity; namespace MVVMLightUnityExample { public class MainViewModel : ViewModelBase { private IDialogService _dialogs; private ILoggerService _logger; /// <summary> /// Initializes a new instance of the MainViewModel class. This default constructor calls the /// non-default constructor resolving the interfaces used by this view model. /// </summary> public MainViewModel() : this(ViewModelLocator.Container.Resolve<IDialogService>(), ViewModelLocator.Container.Resolve<ILoggerService>()) { if (IsInDesignMode) { // Code runs in Blend --> create design time data. } else { // Code runs "for real" } } /// <summary> /// Initializes a new instance of the MainViewModel class. /// Interfaces are automatically resolved by the IoC container. /// </summary> /// <param name="dialogs">Interface to dialog service</param> /// <param name="logger">Interface to logger service</param> public MainViewModel(IDialogService dialogs, ILoggerService logger) { _dialogs = dialogs; _logger = logger; if (IsInDesignMode) { // Code runs in Blend --> create design time data. _dialogs.ShowMessage("Running in design-time mode!", "Injection Constructor", DialogButton.OK, DialogImage.Information); _logger.WriteLine("Running in design-time mode!"); } else { // Code runs "for real" _dialogs.ShowMessage("Running in run-time mode!", "Injection Constructor", DialogButton.OK, DialogImage.Information); _logger.WriteLine("Running in run-time mode!"); } } public override void Cleanup() { // Clean up if needed _dialogs = null; _logger = null; base.Cleanup(); } } } And the OrderViewModel class: using GalaSoft.MvvmLight; using Microsoft.Practices.Unity; namespace MVVMLightUnityExample { /// <summary> /// This class contains properties that a View can data bind to. /// <para> /// Use the <strong>mvvminpc</strong> snippet to add bindable properties to this ViewModel. /// </para> /// <para> /// You can also use Blend to data bind with the tool's support. /// </para> /// <para> /// See http://www.galasoft.ch/mvvm/getstarted /// </para> /// </summary> public class OrderViewModel : ViewModelBase { private const string testOrderNumber = "123456"; private Order _order; /// <summary> /// Initializes a new instance of the OrderViewModel class. /// </summary> public OrderViewModel() : this(testOrderNumber) { } /// <summary> /// Initializes a new instance of the OrderViewModel class. /// </summary> public OrderViewModel(string orderNumber) { if (IsInDesignMode) { // Code runs in Blend --> create design time data. 
_order = new Order(orderNumber, "My Company", "Our Address"); } else { _order = GetOrder(orderNumber); } } public override void Cleanup() { // Clean own resources if needed _order = null; base.Cleanup(); } } } And the code that could be used to display an order view for a specific order: public void ShowOrder(string orderNumber) { // pass the order number to show to ViewModelLocator to be injected //into the constructor of the OrderViewModel instance ViewModelLocator.OrderToShow = orderNumber; View.OrderView orderView = new View.OrderView(); } These examples have been stripped down to show only the IoC ideas. It took a lot of trial and error, searching the internet for examples, and finding out that the Unity 2.0 documentation is lacking (at best) to come up with this solution. Let me know if you think it could be improved.
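    Since feedback was invited, one hedged simplification is sketched below: register the view models themselves in the same UnityContainer, so the locator only resolves. That removes the static _main field and the service-locator call inside MainViewModel's default constructor, because Resolve<MainViewModel>() both supplies the IDialogService/ILoggerService arguments and, with a ContainerControlledLifetimeManager, keeps returning the same instance. It assumes MainViewModel keeps just its (IDialogService, ILoggerService) constructor; it is an idea, not a drop-in replacement for the full locator above.
    // Hedged sketch: let Unity build the view models too, so the locator only resolves.
    using Microsoft.Practices.Unity;

    public class ViewModelLocator
    {
        private static readonly IUnityContainer Container = new UnityContainer();

        // Order number injected into OrderViewModel when an order view is created.
        public static string OrderToView { get; set; }

        static ViewModelLocator()
        {
            Container
                .RegisterType<IDialogService, ModalDialogService>(new ContainerControlledLifetimeManager())
                .RegisterType<ILoggerService, LogFileService>(new ContainerControlledLifetimeManager())
                .RegisterType<MainViewModel>(new ContainerControlledLifetimeManager()) // one shared instance
                .RegisterType<OrderViewModel>();                                       // new instance per resolve
        }

        public MainViewModel Main
        {
            get { return Container.Resolve<MainViewModel>(); }
        }

        public OrderViewModel Order
        {
            get
            {
                return Container.Resolve<OrderViewModel>(
                    new ParameterOverride("orderNumber", OrderToView));
            }
        }
    }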

    Read the article

  • C# ASP.NET AJAX CascadingDropDown SelectedValue property problem

    - by Eyla
    Greetings, I have a problem to use selected value propriety of CascadingDropDown. I have 3 asp dropdown controls with ajax CascadingDropDown for each one of them. I have no problem to bind data to the 3 CascadingDropDown but my problem is to rebind CascadingDropDown. simply what I want to do is to select a record from Gridview which has the selected values for the CascadingDropDown that I want to pass then rebind the CascadingDropDown with selected value. I'm posting my code down which include: 1-ASP.NET code. 2-Code behind to handle selected record from grid view. 3- web servisice that handle binding data to the 3 CascadingDropDown. please advice how to rebind data to CascadingDropDown with selected value. by the way I used selected value proprety as showning in my code but it is not working and there is no error. Thank you, ........................ ASP.NET code ........................ <%@ Page Title="" Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="IMAM_APPLICATION.WebForm1" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="idcontact_info" DataSourceID="ObjectDataSource1" onselectedindexchanged="GridView1_SelectedIndexChanged"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="idcontact_info" HeaderText="idcontact_info" InsertVisible="False" ReadOnly="True" SortExpression="idcontact_info" /> <asp:BoundField DataField="Work_Field" HeaderText="Work_Field" SortExpression="Work_Field" /> <asp:BoundField DataField="Occupation" HeaderText="Occupation" SortExpression="Occupation" /> <asp:BoundField DataField="sub_Occupation" HeaderText="sub_Occupation" SortExpression="sub_Occupation" /> </Columns> </asp:GridView> <asp:Label ID="lbl" runat="server" Text="Label"></asp:Label> <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" DeleteMethod="Delete" InsertMethod="Insert" OldValuesParameterFormatString="original_{0}" SelectMethod="GetData" TypeName="IMAM_APPLICATION.DSContactTableAdapters.contact_infoTableAdapter" UpdateMethod="Update"> <DeleteParameters> <asp:Parameter Name="Original_idcontact_info" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="Work_Field" Type="String" /> <asp:Parameter Name="Occupation" Type="String" /> <asp:Parameter Name="sub_Occupation" Type="String" /> <asp:Parameter Name="Original_idcontact_info" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="Work_Field" Type="String" /> <asp:Parameter Name="Occupation" Type="String" /> <asp:Parameter Name="sub_Occupation" Type="String" /> </InsertParameters> </asp:ObjectDataSource> <asp:DropDownList ID="cmbWorkField" runat="server" Style="top: 715px; left: 180px; position: absolute; height: 22px; width: 126px"> </asp:DropDownList> <asp:DropDownList runat="server" ID="cmbOccupation" Style="top: 745px; left: 180px; position: absolute; height: 22px; width: 77px"> </asp:DropDownList> <asp:DropDownList ID="cmbSubOccup" runat="server" style="position:absolute; top: 775px; left: 180px;"> </asp:DropDownList> <cc1:CascadingDropDown ID="cmbWorkField_CascadingDropDown" runat="server" TargetControlID="cmbWorkField" Category="WorkField" LoadingText="Please Wait ..." PromptText="Select Wor kField ..." 
ServiceMethod="GetWorkField" ServicePath="ServiceTags.asmx"> </cc1:CascadingDropDown> <cc1:CascadingDropDown ID="cmbOccupation_CascadingDropDown" runat="server" TargetControlID="cmbOccupation" Category="Occup" LoadingText="Please wait..." PromptText="Select Occup ..." ServiceMethod="GetOccup" ServicePath="ServiceTags.asmx" ParentControlID="cmbWorkField"> </cc1:CascadingDropDown> <cc1:CascadingDropDown ID="cmbSubOccup_CascadingDropDown" runat="server" Category="SubOccup" Enabled="True" LoadingText="Please Wait..." ParentControlID="cmbOccupation" PromptText="Select Sub Occup" ServiceMethod="GetSubOccup" ServicePath="ServiceTags.asmx" TargetControlID="cmbSubOccup"> </cc1:CascadingDropDown> </asp:Content> ...................................................... C# code behind ...................................................... protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { string strg = GridView1.SelectedDataKey["idcontact_info"].ToString(); int index = Convert.ToInt32(GridView1.SelectedDataKey["idcontact_info"].ToString()); //txtSearch.Text = GridView1.SelectedIndex.ToString(); // txtSearch.Text = GridView1.SelectedDataKey["idcontact_info"].ToString(); DSContactTableAdapters.contact_infoTableAdapter GetByIDAdapter = new DSContactTableAdapters.contact_infoTableAdapter(); DSContact.contact_infoDataTable ByID = GetByIDAdapter.GetDataByID(index); //DSSearch.contact_infoDataTable FirstName = FirstNameAdapter.GetDataByFirstNameList(prefixText); foreach (DataRow dr in ByID.Rows) { lbl.Text = dr["Work_Field"].ToString() + "....." + dr["Occupation"].ToString() + "....." + dr["sub_Occupation"].ToString(); cmbWorkField_CascadingDropDown.SelectedValue = dr["Work_Field"].ToString(); cmbOccupation_CascadingDropDown.SelectedValue = dr["Occupation"].ToString(); cmbSubOccup_CascadingDropDown.SelectedValue = dr["sub_Occupation"].ToString(); } } ....................................................... web Service ....................................................... 
[WebMethod] public CascadingDropDownNameValue[] GetWorkField(string knownCategoryValues, string category) { //dsCarsTableAdapters.CarsTableAdapter makeAdapter = new dsCarsTableAdapters.CarsTableAdapter(); //dsCars.CarsDataTable makes = makeAdapter.GetAllCars(); DSContactTableAdapters.tag_work_fieldTableAdapter GetWorkFieldAdapter = new DSContactTableAdapters.tag_work_fieldTableAdapter(); DSContact.tag_work_fieldDataTable WorkFields = GetWorkFieldAdapter.GetDataByGetWorkField(); List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in WorkFields) { string Work_Field = (string)dr["work_Field_name"]; int idtag_work_field = (int)dr["idtag_work_field"]; values.Add(new CascadingDropDownNameValue(Work_Field, idtag_work_field.ToString())); } return values.ToArray(); } [WebMethod] public CascadingDropDownNameValue[] GetOccup(string knownCategoryValues, string category) { StringDictionary kv = CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues); int idtag_work_field; if (!kv.ContainsKey("WorkField") || !Int32.TryParse(kv["WorkField"], out idtag_work_field)) { return null; } //dsCarModelsTableAdapters.CarModelsTableAdapter modelAdapter = new dsCarModelsTableAdapters.CarModelsTableAdapter(); //dsCarModels.CarModelsDataTable models = modelAdapter.GetModelsByCarId(makeId); DSContactTableAdapters.tag_OccupTableAdapter GetOccupAdapter = new DSContactTableAdapters.tag_OccupTableAdapter(); DSContact.tag_OccupDataTable Occups = GetOccupAdapter.GetByOccup_ID(idtag_work_field); // List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in Occups) { values.Add(new CascadingDropDownNameValue((string)dr["Occup_Name"], dr["idtag_Occup"].ToString())); } return values.ToArray(); } [WebMethod] public CascadingDropDownNameValue[] GetSubOccup(string knownCategoryValues, string category) { StringDictionary kv = CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues); int idtag_Occup; if (!kv.ContainsKey("Occup") || !Int32.TryParse(kv["Occup"], out idtag_Occup)) { return null; } //dsModelColorsTableAdapters.ModelColorsTableAdapter adapter = new dsModelColorsTableAdapters.ModelColorsTableAdapter(); //dsModelColors.ModelColorsDataTable colors = adapter.GetColorsByModelId(colorId); DSContactTableAdapters.tag_Sub_OccupTableAdapter GetSubOccupAdapter = new DSContactTableAdapters.tag_Sub_OccupTableAdapter(); DSContact.tag_Sub_OccupDataTable SubOccups = GetSubOccupAdapter.GetDataBy_Sub_Occup_ID(idtag_Occup); List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in SubOccups) { values.Add(new CascadingDropDownNameValue((string)dr["Sub_Occup_Name"], dr["idtag_Sub_Occup"].ToString())); } return values.ToArray(); }
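    A hedged observation on the code-behind above: the web service returns CascadingDropDownNameValue pairs whose value half is the numeric ID (idtag_work_field, idtag_Occup, idtag_Sub_Occup) and whose name half is the display text, so assigning SelectedValue from the Work_Field/Occupation/sub_Occupation strings gives it a value that never matches any item. A sketch of the idea follows; the Lookup* helpers are hypothetical placeholders for whatever query maps a name back to its ID.
    // Hedged sketch: SelectedValue must be the *value* half of the name/value pairs
    // (the ID string the web service returns), not the display name from contact_info.
    // LookupWorkFieldId/LookupOccupId/LookupSubOccupId are hypothetical helpers.
    protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
    {
        int index = Convert.ToInt32(GridView1.SelectedDataKey["idcontact_info"].ToString());

        var adapter = new DSContactTableAdapters.contact_infoTableAdapter();
        DSContact.contact_infoDataTable byId = adapter.GetDataByID(index);

        foreach (DataRow dr in byId.Rows)
        {
            cmbWorkField_CascadingDropDown.SelectedValue = LookupWorkFieldId((string)dr["Work_Field"]);
            cmbOccupation_CascadingDropDown.SelectedValue = LookupOccupId((string)dr["Occupation"]);
            cmbSubOccup_CascadingDropDown.SelectedValue = LookupSubOccupId((string)dr["sub_Occupation"]);
        }
    }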

    Read the article

  • Dependency Property WPF Grid

    - by developer
    Hi All, I want to Bind the textblock text in WPF datagrid to a dependency property. Somehow, nothing gets displayed, but when I use the same textblock binding outside the grid, everything works fine. Below is my code, <Window.Resources> <Style x:Key="cellCenterAlign" TargetType="{x:Type toolkit:DataGridCell}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type toolkit:DataGridCell}"> <Grid Background="{TemplateBinding Background}"> <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center"/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style x:Key="ColumnHeaderStyle" TargetType="{x:Type toolkit:DataGridColumnHeader}"> <Setter Property="VerticalContentAlignment" Value="Center" /> <Setter Property="HorizontalContentAlignment" Value="Center"/> </Style> <ObjectDataProvider MethodName="GetValues" ObjectType="{x:Type sys:Enum}" x:Key="RoleValues"> <ObjectDataProvider.MethodParameters> <x:Type TypeName="domain:SubscriptionRole"/> </ObjectDataProvider.MethodParameters> </ObjectDataProvider> <DataTemplate x:Key="myTemplate"> <StackPanel> <TextBlock Text="{Binding Path=OtherSubs}"/> </StackPanel> </DataTemplate> </Window.Resources> <Grid> <Grid.RowDefinitions> <RowDefinition Height="220"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <StackPanel Grid.Row="0"> <toolkit:DataGrid Name="definitionGrid" Margin="0,10,0,0" AutoGenerateColumns="False" CanUserAddRows="False" CanUserDeleteRows="False" IsReadOnly="False" RowHeight="25" FontWeight="Normal" ItemsSource="{Binding programSubscription}" ColumnHeaderStyle="{DynamicResource ColumnHeaderStyle}" SelectionMode="Single" ScrollViewer.HorizontalScrollBarVisibility="Disabled" Width="450" ScrollViewer.VerticalScrollBarVisibility="Auto" Height="200"> <toolkit:DataGrid.Columns> <toolkit:DataGridTextColumn Header="Program" Width="80" Binding="{Binding Program.JobNum}" CellStyle="{StaticResource cellCenterAlign}" IsReadOnly="True"/> <toolkit:DataGridTemplateColumn Header="Role" Width="80" CellStyle="{StaticResource cellCenterAlign}"> <toolkit:DataGridTemplateColumn.CellTemplate> <DataTemplate> <ComboBox SelectedItem="{Binding Role}" ItemsSource="{Binding Source={StaticResource RoleValues}}" Width="70"> <ComboBox.Style> <Style> <Style.Triggers> <DataTrigger Binding="{Binding Path=Role}" Value="Owner"> <Setter Property="ComboBox.Focusable" Value="False"/> <Setter Property="ComboBox.IsEnabled" Value="False"/> <Setter Property="ComboBox.IsHitTestVisible" Value="False"/> </DataTrigger> </Style.Triggers> </Style> </ComboBox.Style> </ComboBox> </DataTemplate> </toolkit:DataGridTemplateColumn.CellTemplate> </toolkit:DataGridTemplateColumn> <toolkit:DataGridCheckBoxColumn Header="Email" Width="60" Binding="{Binding ReceivesEmail}" CellStyle="{StaticResource cellCenterAlign}"/> <!--<toolkit:DataGridTextColumn Header="Others" Width="220" Binding="{Binding programSubscription1.Subscriber.Username}" CellStyle="{StaticResource cellCenterAlign}" IsReadOnly="True"/>--> <toolkit:DataGridTemplateColumn Header="Others" Width="220" CellStyle="{StaticResource cellCenterAlign}" IsReadOnly="True"> <toolkit:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Path=OtherSubs}"/> </DataTemplate> </toolkit:DataGridTemplateColumn.CellTemplate> </toolkit:DataGridTemplateColumn> </toolkit:DataGrid.Columns> </toolkit:DataGrid> <TextBlock Text="{Binding Path=OtherSubs}"/> </StackPanel> <Grid Grid.Row="1"> <Grid.ColumnDefinitions> <ColumnDefinition Width="200"/> <ColumnDefinition Width="*"/> 
</Grid.ColumnDefinitions> <StackPanel Grid.Column="0" HorizontalAlignment="Center" VerticalAlignment="Center"> <CheckBox Content="Show Only Active Programs" IsChecked="True" Margin="0,0,8,0"/> </StackPanel> <StackPanel Orientation="Horizontal" VerticalAlignment="Center" Grid.Column="1" HorizontalAlignment="Right"> <Button Content="Save" Height="23" Width="75" Margin="0,0,8,0" Click="Save_Click"/> <Button Content="Cancel" Height="23" Width="75" Margin="0,0,8,0" Click="Cancel_Click" /> </StackPanel> </Grid> </Grid> Code-Behind public partial class ProgramSubscriptions : Window { public static ObservableCollection programSubscription { get; set; } public string OtherSubs { get { return (string)GetValue(OtherSubsProperty); } set { SetValue(OtherSubsProperty, value); } } public static readonly DependencyProperty OtherSubsProperty = DependencyProperty.Register("OtherSubs", typeof(string), typeof(ProgramSubscriptions), new UIPropertyMetadata(string.Empty)); private string CurrentUsername = "test"; public ProgramSubscriptions() { InitializeComponent(); DataContext = this; LoadData(); } protected void LoadData() { programSubscription = new ObservableCollection<ProgramSubscriptionViewModel>(); if (res != null && res.TotalResults > 0) { List<ProgramSubscriptionViewModel> UserPrgList = new List<ProgramSubscriptionViewModel>(); //other.... List<ProgramSubscriptionViewModel> OtherPrgList = new List<ProgramSubscriptionViewModel>(); ArrayList myList = new ArrayList(); foreach (DomainObject obj in res.ResultSet) { ProgramSubscription prg = (ProgramSubscription)obj; if (prg.Subscriber.Username == CurrentUsername) { UserPrgList.Add(new ProgramSubscriptionViewModel(prg)); myList.Add(prg.Program.ID); } else OtherPrgList.Add(new ProgramSubscriptionViewModel(prg)); } for (int i = 0; i < UserPrgList.Count; i++) { ProgramSubscriptionViewModel item = UserPrgList[i]; programSubscription.Add(item); } //other.... for (int i = 0; i < OtherPrgList.Count; i++) { foreach (int y in myList) { ProgramSubscriptionViewModel otheritem = OtherPrgList[i]; if (y == otheritem.Program.ID) OtherSubs += otheritem.Subscriber.Username + ", "; } } } } } I posted the entire code. What exactly I want to do is in the datagridtemplatecolumn for others I want to display the usernames that are not in CurrentUsername, but they have the same program Id as the CurrentUsername. Please do let me know if there is another way that i can make this work, instead of using a dependencyproperty, althouht for testing I did put a textblock below datagrid, and it works perfectly fine.. Help!
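    A hedged reading of why the in-grid TextBlock stays empty while the one under the grid works: inside the cell template the DataContext is the row item (ProgramSubscriptionViewModel), so Path=OtherSubs is looked up on the row rather than on the window whose DataContext is set to this. Walking back up to the window restores the original binding, as in the sketch below; if per-row text is ultimately wanted, a property on ProgramSubscriptionViewModel would avoid the ancestor lookup altogether.
    <!-- Hedged sketch: bind through the window's DataContext from inside the cell template. -->
    <toolkit:DataGridTemplateColumn Header="Others" Width="220"
                                    CellStyle="{StaticResource cellCenterAlign}" IsReadOnly="True">
        <toolkit:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Path=DataContext.OtherSubs,
                                  RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Window}}}"/>
            </DataTemplate>
        </toolkit:DataGridTemplateColumn.CellTemplate>
    </toolkit:DataGridTemplateColumn>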

    Read the article

  • Bindable richTextBox still hanging in memory {WPF, Caliburn.Micro}

    - by Paul
    Hi, I use in WFP Caliburn.Micro Framework. I need bindable richTextbox for Document property. I found many ways how do it bindable richTextBox. But I have one problem. From parent window I open child window. Child window consist bindable richTextBox user control. After I close child window and use memory profiler view class with bindabelrichTextBox control and view model class is still hanging in memory. - this cause memory leaks. If I use richTextBox from .NET Framework or richTextBox from Extended WPF Toolkit it doesn’t cause this memory leak problem. I can’t identified problem in bindable richTextBox class. Here is ist class for bindable richTextBox: Base class can be from .NET or Extended toolkit. /// <summary> /// Represents a bindable rich editing control which operates on System.Windows.Documents.FlowDocument /// objects. /// </summary> public class BindableRichTextBox : RichTextBox { /// <summary> /// Identifies the <see cref="Document"/> dependency property. /// </summary> public static readonly DependencyProperty DocumentProperty = DependencyProperty.Register("Document", typeof(FlowDocument), typeof(BindableRichTextBox)); /// <summary> /// Initializes a new instance of the <see cref="BindableRichTextBox"/> class. /// </summary> public BindableRichTextBox() : base() { } /// <summary> /// Initializes a new instance of the <see cref="BindableRichTextBox"/> class. /// </summary> /// <param title="document">A <see cref="T:System.Windows.Documents.FlowDocument"></see> to be added as the initial contents of the new <see cref="T:System.Windows.Controls.BindableRichTextBox"></see>.</param> public BindableRichTextBox(FlowDocument document) : base(document) { } /// <summary> /// Raises the <see cref="E:System.Windows.FrameworkElement.Initialized"></see> event. This method is invoked whenever <see cref="P:System.Windows.FrameworkElement.IsInitialized"></see> is set to true internally. /// </summary> /// <param title="e">The <see cref="T:System.Windows.RoutedEventArgs"></see> that contains the event data.</param> protected override void OnInitialized(EventArgs e) { // Hook up to get notified when DocumentProperty changes. DependencyPropertyDescriptor descriptor = DependencyPropertyDescriptor.FromProperty(DocumentProperty, typeof(BindableRichTextBox)); descriptor.AddValueChanged(this, delegate { // If the underlying value of the dependency property changes, // update the underlying document, also. base.Document = (FlowDocument)GetValue(DocumentProperty); }); // By default, we support updates to the source when focus is lost (or, if the LostFocus // trigger is specified explicity. We don't support the PropertyChanged trigger right now. this.LostFocus += new RoutedEventHandler(BindableRichTextBox_LostFocus); base.OnInitialized(e); } /// <summary> /// Handles the LostFocus event of the BindableRichTextBox control. /// </summary> /// <param title="sender">The source of the event.</param> /// <param title="e">The <see cref="System.Windows.RoutedEventArgs"/> instance containing the event data.</param> void BindableRichTextBox_LostFocus(object sender, RoutedEventArgs e) { // If we have a binding that is set for LostFocus or Default (which we are specifying as default) // then update the source. 
Binding binding = BindingOperations.GetBinding(this, DocumentProperty); if (binding.UpdateSourceTrigger == UpdateSourceTrigger.Default || binding.UpdateSourceTrigger == UpdateSourceTrigger.LostFocus) { BindingOperations.GetBindingExpression(this, DocumentProperty).UpdateSource(); } } /// <summary> /// Gets or sets the <see cref="T:System.Windows.Documents.FlowDocument"></see> that represents the contents of the <see cref="T:System.Windows.Controls.BindableRichTextBox"></see>. /// </summary> /// <value></value> /// <returns>A <see cref="T:System.Windows.Documents.FlowDocument"></see> object that represents the contents of the <see cref="T:System.Windows.Controls.BindableRichTextBox"></see>.By default, this property is set to an empty <see cref="T:System.Windows.Documents.FlowDocument"></see>. Specifically, the empty <see cref="T:System.Windows.Documents.FlowDocument"></see> contains a single <see cref="T:System.Windows.Documents.Paragraph"></see>, which contains a single <see cref="T:System.Windows.Documents.Run"></see> which contains no text.</returns> /// <exception cref="T:System.ArgumentException">Raised if an attempt is made to set this property to a <see cref="T:System.Windows.Documents.FlowDocument"></see> that represents the contents of another <see cref="T:System.Windows.Controls.RichTextBox"></see>.</exception> /// <exception cref="T:System.ArgumentNullException">Raised if an attempt is made to set this property to null.</exception> /// <exception cref="T:System.InvalidOperationException">Raised if this property is set while a change block has been activated.</exception> public new FlowDocument Document { get { return (FlowDocument)GetValue(DocumentProperty); } set { SetValue(DocumentProperty, value); } } } Thank fro help and advice. Qucik example: Child window with .NET richTextBox <Window x:Class="WpfApplication2.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Grid> <RichTextBox Background="Green" VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto" FontSize="13" Margin="4,4,4,4" Grid.Row="0"/> </Grid> </Window> This window I open from parent window: var w = new Window1(); w.Show(); Then close this window, check with memory profiler and it memory doesn’t exist any object of window1 - richTextBox. It’s Ok. But then I try bindable richTextBox: Child window 2: <Window x:Class="WpfApplication2.Window2" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Controls="clr-namespace:WpfApplication2.Controls" Title="Window2" Height="300" Width="300"> <Grid> <Controls:BindableRichTextBox Background="Red" VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto" FontSize="13" Margin="4,4,4,4" Grid.Row="0" /> </Grid> </Window> Open child window 2, close this child window and in memory are still alive object of this child window also bindable richTextBox object.
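The usual culprit for exactly this symptom is the DependencyPropertyDescriptor.AddValueChanged call in OnInitialized: it stores the handler, and through it the control, the child window and the view model, in a static table that is never cleared unless RemoveValueChanged is called, which is why the plain RichTextBox (which makes no such call) is collected while the bindable one is not. Here is a hedged sketch of the same control using a PropertyChangedCallback in the property metadata instead, which holds no static reference (the LostFocus source-update logic is omitted for brevity):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;

// Sketch only: change notification comes from the property metadata, not from
// DependencyPropertyDescriptor.AddValueChanged, so nothing pins the control.
public class BindableRichTextBox : RichTextBox
{
    public static readonly DependencyProperty DocumentProperty =
        DependencyProperty.Register(
            "Document",
            typeof(FlowDocument),
            typeof(BindableRichTextBox),
            new FrameworkPropertyMetadata(null, OnDocumentChanged));

    public new FlowDocument Document
    {
        get { return (FlowDocument)GetValue(DocumentProperty); }
        set { SetValue(DocumentProperty, value); }
    }

    private static void OnDocumentChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // RichTextBox.Document must not be null, so fall back to an empty document.
        ((RichTextBox)d).Document = e.NewValue as FlowDocument ?? new FlowDocument();
    }
}

If the descriptor approach has to stay, pairing AddValueChanged with a matching RemoveValueChanged (for example in an Unloaded handler) should also let the window be collected.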

    Read the article

  • Validation in a datagrid on insert/update in ASP.NET

    - by abhi
    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default7.aspx.cs" Inherits="Default7" % <%@ Register Assembly="Telerik.Web.UI" Namespace="Telerik.Web.UI" TagPrefix="telerik" % Untitled Page   <telerik:GridTemplateColumn Visible="false"> <ItemTemplate> <asp:Label ID="lblEmpID" runat="server" Text='<%# bind("pid") %>'> </asp:Label> </ItemTemplate> </telerik:GridTemplateColumn> <telerik:GridBoundColumn UniqueName="Fname" HeaderText="Fname" DataField="Fname" CurrentFilterFunction="NotIsNull" SortExpression="Fname"> </telerik:GridBoundColumn> <telerik:GridBoundColumn UniqueName="Lname" HeaderText="Lname" DataField="Lname" CurrentFilterFunction="NotIsNull" SortExpression="Lname"> </telerik:GridBoundColumn> <telerik:GridBoundColumn UniqueName="Designation" HeaderText="Designation" DataField="Designation" CurrentFilterFunction="NotIsNull" SortExpression="Designation"> </telerik:GridBoundColumn> <telerik:GridEditCommandColumn> </telerik:GridEditCommandColumn> <telerik:GridButtonColumn CommandName="Delete" Text="Delete" UniqueName="column"> </telerik:GridButtonColumn> </Columns> <EditFormSettings> <FormTemplate> <table> <tr> <td>Fname*</td> <td> <asp:HiddenField ID="Fname" runat="server" Visible="false" /> <asp:TextBox ID="txtFname" runat="server" Text='<%#("Fname")%>'> </asp:TextBox> <asp:RequiredFieldValidator ID="EvalFname" ControlToValidate="txtFname" ErrorMessage="Enter Name" runat="server" ValidationGroup="Update"> *</asp:RequiredFieldValidator> </td> </tr> <tr> <td>Lname*</td> <td> <asp:HiddenField ID="HiddenField1" runat="server" Visible="false" /> <asp:TextBox ID="txtLname" runat="server" Text='<%#("Lname")%>'> </asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator1" ControlToValidate="txtLname" ErrorMessage="Enter Name" runat="server" ValidationGroup="Update"> *</asp:RequiredFieldValidator> </td> </tr> <tr> <td>Designation* </td> <td> <asp:HiddenField ID="HiddenField2" runat="server" Visible="false" /> <asp:TextBox ID="txtDesignation" runat="server" Text='<%#("Designation")%>'> </asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator2" ControlToValidate="txtDesignation" ErrorMessage="Enter Designation" runat="server" ValidationGroup="Update"> *</asp:RequiredFieldValidator> </td> </tr> </table> </FormTemplate> <EditColumn UpdateText="Update record" UniqueName="EditCommandColumn1" CancelText="Cancel edit"> </EditColumn> </EditFormSettings> </MasterTableView> </telerik:RadGrid> </form> this is my code i want to perform validation using the required validators but i think m missing smthin so pls help, here's my code behind using System; using System.Data; using System.Configuration; using System.Collections; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Data.SqlClient; using Telerik.Web.UI; public partial class Default7 : System.Web.UI.Page { string strQry; public SqlDataAdapter da; public DataSet ds; public SqlCommand cmd; public DataTable dt; string strCon = "Data Source=MINETDEVDATA; Initial Catalog=ML_SuppliersProd; User Id=sa; Password=@MinetApps7;"; public SqlConnection con; protected void Page_Load(object sender, EventArgs e) { } protected void RadGrid1_NeedDataSource(object source, Telerik.Web.UI.GridNeedDataSourceEventArgs e) { dt = new DataTable(); con = new SqlConnection(strCon); da = new SqlDataAdapter(); try { strQry = "SELECT * FROM table2"; con.Open(); da.SelectCommand = new SqlCommand(strQry,con); 
da.Fill(dt); RadGrid1.DataSource = dt; } finally { con.Close(); } } protected void RadGrid1_DeleteCommand(object source, Telerik.Web.UI.GridCommandEventArgs e) { con = new SqlConnection(strCon); cmd = new SqlCommand(); GridDataItem item = (GridDataItem)e.Item; string pid = item.OwnerTableView.DataKeyValues[item.ItemIndex]["pid"].ToString(); con.Open(); string delete = "DELETE from table2 where pid='"+pid+"'"; cmd.CommandText = delete; cmd.Connection = con; cmd.ExecuteNonQuery(); con.Close(); } protected void RadGrid1_UpdateCommand(object source, GridCommandEventArgs e) { GridEditableItem radgriditem = e.Item as GridEditableItem; string pid = radgriditem.OwnerTableView.DataKeyValues[radgriditem.ItemIndex]["pid"].ToString(); string firstname = (radgriditem["Fname"].Controls[0] as TextBox).Text; string lastname = (radgriditem["Lname"].Controls[0] as TextBox).Text; string designation = (radgriditem["Designation"].Controls[0] as TextBox).Text; con = new SqlConnection(strCon); cmd = new SqlCommand(); try { con.Open(); string update = "UPDATE table2 set Fname='" + firstname + "',Lname='" + lastname + "',Designation='" + designation + "' WHERE pid='" + pid + "'"; cmd.CommandText = update; cmd.Connection = con; cmd.ExecuteNonQuery(); con.Close(); } catch (Exception ex) { RadGrid1.Controls.Add(new LiteralControl("Unable to update Reason: " + ex.Message)); e.Canceled = true; } } protected void RadGrid1_InsertCommand(object source, GridCommandEventArgs e) { GridEditFormInsertItem insertitem = (GridEditFormInsertItem)e.Item; string firstname = (insertitem["Fname"].Controls[0] as TextBox).Text; string lastname = (insertitem["Lname"].Controls[0] as TextBox).Text; string designation = (insertitem["Designation"].Controls[0] as TextBox).Text; con = new SqlConnection(strCon); cmd = new SqlCommand(); try { con.Open(); String insertQuery = "INSERT into table1(Fname,Lname,Designation) Values ('" + firstname + "','" + lastname + "','" + designation + "')"; cmd.CommandText = insertQuery; cmd.Connection = con; cmd.ExecuteNonQuery(); con.Close(); } catch(Exception ex) { RadGrid1.Controls.Add(new LiteralControl("Unable to insert Reason:" + ex.Message)); e.Canceled = true; } } }
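One thing that stands out: every validator is in ValidationGroup="Update", but the grid's built-in Update/Insert buttons post back with the default (empty) validation group, so that group never runs, the page "just refreshes", and the command goes straight to the database. Either remove the ValidationGroup attributes so the default group applies, or validate the group explicitly on the server, roughly as in the sketch below (EditFormIsValid is an illustrative helper, not existing code; switching to parameterized SQL instead of string concatenation is a separate, strongly recommended change):

using System.Web.UI;
using Telerik.Web.UI;

public partial class Default7 : System.Web.UI.Page
{
    // Sketch: call this first inside RadGrid1_UpdateCommand and RadGrid1_InsertCommand.
    // It runs the "Update" validation group server-side and cancels the command so the
    // edit form stays open and the RequiredFieldValidators can show their messages.
    private bool EditFormIsValid(GridCommandEventArgs e)
    {
        Page.Validate("Update");
        if (!Page.IsValid)
        {
            e.Canceled = true;
            return false;
        }
        return true;
    }
}

Usage inside the existing handlers would then start with: if (!EditFormIsValid(e)) return;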

    Read the article

  • Ninject.ActivationException: Error activating IMainLicense

    - by Stefan Karlsson
    Im don't know fully how Ninject works thats wye i ask this question here to figure out whats wrong. If i create a empty constructor in ClaimsSecurityService it gets hit. This is my error: Error activating IMainLicense No matching bindings are available, and the type is not self-bindable. Activation path: 3) Injection of dependency IMainLicense into parameter mainLicenses of constructor of type ClaimsSecurityService 2) Injection of dependency ISecurityService into parameter securityService of constructor of type AccountController 1) Request for AccountController Stack: Ninject.KernelBase.Resolve(IRequest request) +474 Ninject.Planning.Targets.Target`1.GetValue(Type service, IContext parent) +153 Ninject.Planning.Targets.Target`1.ResolveWithin(IContext parent) +747 Ninject.Activation.Providers.StandardProvider.GetValue(IContext context, ITarget target) +269 Ninject.Activation.Providers.<>c__DisplayClass4.<Create>b__2(ITarget target) +69 System.Linq.WhereSelectArrayIterator`2.MoveNext() +66 System.Linq.Buffer`1..ctor(IEnumerable`1 source) +216 System.Linq.Enumerable.ToArray(IEnumerable`1 source) +77 Ninject.Activation.Providers.StandardProvider.Create(IContext context) +847 Ninject.Activation.Context.ResolveInternal(Object scope) +218 Ninject.Activation.Context.Resolve() +277 Ninject.<>c__DisplayClass15.<Resolve>b__f(IBinding binding) +86 System.Linq.WhereSelectEnumerableIterator`2.MoveNext() +145 System.Linq.Enumerable.SingleOrDefault(IEnumerable`1 source) +4059897 Ninject.Planning.Targets.Target`1.GetValue(Type service, IContext parent) +169 Ninject.Planning.Targets.Target`1.ResolveWithin(IContext parent) +747 Ninject.Activation.Providers.StandardProvider.GetValue(IContext context, ITarget target) +269 Ninject.Activation.Providers.<>c__DisplayClass4.<Create>b__2(ITarget target) +69 System.Linq.WhereSelectArrayIterator`2.MoveNext() +66 System.Linq.Buffer`1..ctor(IEnumerable`1 source) +216 System.Linq.Enumerable.ToArray(IEnumerable`1 source) +77 Ninject.Activation.Providers.StandardProvider.Create(IContext context) +847 Ninject.Activation.Context.ResolveInternal(Object scope) +218 Ninject.Activation.Context.Resolve() +277 Ninject.<>c__DisplayClass15.<Resolve>b__f(IBinding binding) +86 System.Linq.WhereSelectEnumerableIterator`2.MoveNext() +145 System.Linq.Enumerable.SingleOrDefault(IEnumerable`1 source) +4059897 Ninject.Web.Mvc.NinjectDependencyResolver.GetService(Type serviceType) +145 System.Web.Mvc.DefaultControllerActivator.Create(RequestContext requestContext, Type controllerType) +87 [InvalidOperationException: An error occurred when trying to create a controller of type 'Successful.Struct.Web.Controllers.AccountController'. Make sure that the controller has a parameterless public constructor.] 
System.Web.Mvc.DefaultControllerActivator.Create(RequestContext requestContext, Type controllerType) +247 System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +438 System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +257 System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +326 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +157 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +88 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +50 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +301 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Account controller: public class AccountController : Controller { private readonly ISecurityService _securityService; public AccountController(ISecurityService securityService) { _securityService = securityService; } // // GET: /Account/Login [AllowAnonymous] public ActionResult Login(string returnUrl) { ViewBag.ReturnUrl = returnUrl; return View(); } } NinjectWebCommon: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Http; using System.Web.Http.Dependencies; using Microsoft.Web.Infrastructure.DynamicModuleHelper; using Ninject; using Ninject.Extensions.Conventions; using Ninject.Parameters; using Ninject.Syntax; using Ninject.Web.Common; using Successful.Struct.Web; [assembly: WebActivator.PreApplicationStartMethod(typeof(NinjectWebCommon), "Start")] [assembly: WebActivator.ApplicationShutdownMethodAttribute(typeof(NinjectWebCommon), "Stop")] namespace Successful.Struct.Web { public static class NinjectWebCommon { private static readonly Bootstrapper Bootstrapper = new Bootstrapper(); /// <summary> /// Starts the application /// </summary> public static void Start() { DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule)); DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule)); Bootstrapper.Initialize(CreateKernel); } /// <summary> /// Stops the application. /// </summary> public static void Stop() { Bootstrapper.ShutDown(); } /// <summary> /// Creates the kernel that will manage your application. /// </summary> /// <returns>The created kernel.</returns> private static IKernel CreateKernel() { var kernel = new StandardKernel(); kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel); kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>(); kernel.Load("Successful*.dll"); kernel.Bind(x => x.FromAssembliesMatching("Successful*.dll") .SelectAllClasses() .BindAllInterfaces() ); GlobalConfiguration.Configuration.DependencyResolver = new NinjectResolver(kernel); RegisterServices(kernel); return kernel; } /// <summary> /// Load your modules or register your services here! 
/// </summary> /// <param name="kernel">The kernel.</param> private static void RegisterServices(IKernel kernel) { } } public class NinjectResolver : NinjectScope, IDependencyResolver { private readonly IKernel _kernel; public NinjectResolver(IKernel kernel) : base(kernel) { _kernel = kernel; } public IDependencyScope BeginScope() { return new NinjectScope(_kernel.BeginBlock()); } } public class NinjectScope : IDependencyScope { protected IResolutionRoot ResolutionRoot; public NinjectScope(IResolutionRoot kernel) { ResolutionRoot = kernel; } public object GetService(Type serviceType) { var request = ResolutionRoot.CreateRequest(serviceType, null, new Parameter[0], true, true); return ResolutionRoot.Resolve(request).SingleOrDefault(); } public IEnumerable<object> GetServices(Type serviceType) { var request = ResolutionRoot.CreateRequest(serviceType, null, new Parameter[0], true, true); return ResolutionRoot.Resolve(request).ToList(); } public void Dispose() { var disposable = (IDisposable)ResolutionRoot; if (disposable != null) disposable.Dispose(); ResolutionRoot = null; } } } ClaimsSecurityService: public class ClaimsSecurityService : ISecurityService { private const string AscClaimsIdType = "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider"; private const string SuccessfulStructWebNamespace = "Successful.Struct.Web"; private readonly IMainLicense _mainLicenses; private readonly ICompany _companys; private readonly IAuthTokenService _authService; [Inject] public IApplicationContext ApplicationContext { get; set; } [Inject] public ILogger<LocationService> Logger { get; set; } public ClaimsSecurityService(IMainLicense mainLicenses, ICompany companys, IAuthTokenService authService) { _mainLicenses = mainLicenses; _companys = companys; _authService = authService; } }
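The activation error already names the missing piece: nothing is bound to IMainLicense, so the convention scan over "Successful*.dll" apparently never sees a class implementing it (for example because the implementation lives in an assembly whose name does not match the pattern, or is excluded for some other reason). A minimal sketch of an explicit binding follows; MainLicense and Company are placeholder names for whatever the concrete implementations really are:

using Ninject;
using Ninject.Web.Common;

public static class ExplicitBindings
{
    // Sketch: register the dependencies the conventions missed. Call this from
    // RegisterServices(kernel) in NinjectWebCommon.
    public static void RegisterServices(IKernel kernel)
    {
        kernel.Bind<IMainLicense>().To<MainLicense>().InRequestScope();
        kernel.Bind<ICompany>().To<Company>().InRequestScope();
    }
}

An empty constructor on ClaimsSecurityService only "works" because it removes the dependencies Ninject cannot satisfy, which hides the real problem rather than fixing it.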

    Read the article

  • Problem with onRetainNonConfigurationInstance

    - by David
    I am writing a small app using the Android SDK, 1.6 target, and the Eclipse plug-in. I have layouts for both portrait and landscape mode, and most everything is working well. I say most because I am having issues with the orientation change. One part of the app has a ListView "on top of" another section. That section consists of 4 checkboxes, a button, and some TextViews. That is the portrait version. The landscape version replaces the ListView with a Spinner and rearranges some of the other components (but leaves the ALL resource ids the same). While in either orientation things work like they should. It's when the app switches orientation that things go off. Only 1 of the checkboxes maintains it's state throughout both layout changes. The other three CBs only maintain their state when going from landscape-portrait. I am also having problem getting the ListView/Spinner to correctly set themselves on changing. I am using onRetainNonConfigurationInstance() and creating a custom object that is returned. When I step through the code during a orientation change, the custom object is successfully pulled back out the the ether, and the widgets are being set to the correct values (inspecting them). But for some reason, once the onCreate is done, the checkboxes are not set to true. public class SkillSelectionActivity extends Activity { private Button rollDiceButton; private ListView skillListView; private CheckBox makeCommonCB; private CheckBox useEdgeCB; private CheckBox useSpecializationCB; private CheckBox isExtendedCB; private TextView skillNameView; private TextView skillRanksView; private TextView rollResultView; private TextView rollSuccessesView; private TextView rollFailuresView; private TextView extendedTestTotalView; private TextView extendedTestTimeView; private TextView skillSpecNameView; private int extendedTestTotal = 0; private int extendedTestTime = 0; private Skill currentSkill; private int currentPosition = 0; private SRCharacter character; private int skillSelectionType; private Spinner skillSpinnerView; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.skill_selection2); Intent intent = getIntent(); Bundle extras = intent.getExtras(); skillSelectionType = extras.getInt("SKILL_SELECTION"); skillListView = (ListView) findViewById(R.id.skillList); skillSpinnerView = (Spinner) findViewById(R.id.skillSpinner); rollDiceButton = (Button) findViewById(R.id.rollDiceButton); makeCommonCB = (CheckBox) findViewById(R.id.makeCommonCB); useEdgeCB = (CheckBox) findViewById(R.id.useEdgeCB); useSpecializationCB = (CheckBox) findViewById(R.id.useSpecializationCB); isExtendedCB = (CheckBox) findViewById(R.id.extendedTestCB); skillNameView = (TextView) findViewById(R.id.skillName); skillRanksView = (TextView) findViewById(R.id.skillRanks); rollResultView = (TextView) findViewById(R.id.rollResult); rollSuccessesView = (TextView) findViewById(R.id.rollSuccesses); rollFailuresView = (TextView) findViewById(R.id.rollFailures); extendedTestTotalView = (TextView) findViewById(R.id.extendedTestTotal); extendedTestTimeView = (TextView) findViewById(R.id.extendedTestTime); skillSpecNameView = (TextView) findViewById(R.id.skillSpecName); character = ((SR4DR) getApplication()).getCharacter(); ConfigSaver data = (ConfigSaver) getLastNonConfigurationInstance(); if (data == null) { makeCommonCB.setChecked(false); useEdgeCB.setChecked(false); useSpecializationCB.setChecked(false); isExtendedCB.setChecked(false); currentSkill = null; } 
else { currentSkill = data.getSkill(); currentPosition = data.getPosition(); useEdgeCB.setChecked(data.isEdge()); useSpecializationCB.setChecked(data.isSpec()); isExtendedCB.setChecked(data.isExtended()); makeCommonCB.setChecked(data.isCommon()); if (skillSpinnerView != null) { skillSpinnerView.setSelection(currentPosition); } if (skillListView != null) { skillListView.setSelection(currentPosition); } } // Register handler for UI elements rollDiceButton.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { // guts removed for clarity } }); makeCommonCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); isExtendedCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); useEdgeCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); useSpecializationCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); if (skillListView != null) { skillListView.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View v, int position, long id) { // guts removed for clarity } }); } if (skillSpinnerView != null) { skillSpinnerView.setOnItemSelectedListener(new MyOnItemSelectedListener()); } populateSkillList(); } private void populateSkillList() { String[] list = character.getSkillNames(skillSelectionType); if (list == null) { list = new String[0]; } if (skillListView != null) { ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, R.layout.list_item, list); skillListView.setAdapter(adapter); } if (skillSpinnerView != null) { ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_spinner_item, list); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); skillSpinnerView.setAdapter(adapter); } } public class MyOnItemSelectedListener implements OnItemSelectedListener { public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { // guts removed for clarity } public void onNothingSelected(AdapterView<?> parent) { // Do nothing. } } @Override public Object onRetainNonConfigurationInstance() { ConfigSaver cs = new ConfigSaver(currentSkill, currentPosition, useEdgeCB.isChecked(), useSpecializationCB.isChecked(), makeCommonCB.isChecked(), isExtendedCB.isChecked()); return cs; } class ConfigSaver { private Skill skill = null; private int position = 0; private boolean edge; private boolean spec; private boolean common; private boolean extended; public ConfigSaver(Skill skill, int position, boolean useEdge, boolean useSpec, boolean isCommon, boolean isExt) { this.setSkill(skill); this.position = position; this.edge = useEdge; this.spec = useSpec; this.common = isCommon; this.extended = isExt; } // public getters and setters removed for clarity } }

    Read the article

  • A problem with the ASP.NET create user control

    - by Sir Psycho
    Hi, I've customised the asp.net login control and it seems to create new accounts fine, but if I duplicate the user id thats already registered or enter an email thats already used, the error messages arn't displaying. Its driving me crazy. The page just refreshes without showing an error. I've included the as instructed on the MSDN site but nothing. http://msdn.microsoft.com/en-us/library/ms178342.aspx <asp:CreateUserWizard ErrorMessageStyle-BorderColor="Azure" ID="CreateUserWizard1" runat="server" ContinueDestinationPageUrl="~/home.aspx"> <WizardSteps> <asp:CreateUserWizardStep ID="CreateUserWizardStep1" runat="server"> <ContentTemplate> <asp:Literal ID="ErrorMessage" runat="server"></asp:Literal> <div class="fieldLine"> <asp:Label ID="lblFirstName" runat="server" Text="First Name:" AssociatedControlID="tbxFirstName"></asp:Label> <asp:Label ID="lblLastName" runat="server" Text="Last Name:" AssociatedControlID="tbxLastName"></asp:Label> </div> <div class="fieldLine"> <asp:TextBox ID="tbxFirstName" runat="server"></asp:TextBox> <asp:TextBox ID="tbxLastName" runat="server"></asp:TextBox> </div> <asp:Label ID="lblEmail" runat="server" Text="Email:" AssociatedControlID="Email"></asp:Label> <asp:TextBox ID="Email" runat="server" CssClass="wideInput"></asp:TextBox><br /> <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" CssClass="aspValidator" Display="Dynamic" ControlToValidate="Email" ErrorMessage="Required"></asp:RequiredFieldValidator> <asp:RegularExpressionValidator ID="RegularExpressionValidator1" runat="server" Display="Dynamic" CssClass="aspValidator" ControlToValidate="Email" SetFocusOnError="true" ValidationExpression="^(?:[a-zA-Z0-9_'^&amp;/+-])+(?:\.(?:[a-zA-Z0-9_'^&amp;/+-])+)*@(?:(?:\[?(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))\.){3}(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\]?)|(?:[a-zA-Z0-9-]+\.)+(?:[a-zA-Z]){2,}\.?)$" ErrorMessage="Email address not valid"></asp:RegularExpressionValidator> <asp:Label ID="lblEmailConfirm" runat="server" Text="Confirm Email Address:" AssociatedControlID="tbxEmailConfirm"></asp:Label> <asp:TextBox ID="tbxEmailConfirm" runat="server" CssClass="wideInput"></asp:TextBox><br /> <asp:RequiredFieldValidator ID="RequiredFieldValidator2" runat="server" CssClass="aspValidator" Display="Dynamic" ControlToValidate="tbxEmailConfirm" ErrorMessage="Required"></asp:RequiredFieldValidator> <asp:RegularExpressionValidator ID="RegularExpressionValidator2" runat="server" Display="Dynamic" CssClass="aspValidator" ControlToValidate="tbxEmailConfirm" SetFocusOnError="true" ValidationExpression="^(?:[a-zA-Z0-9_'^&amp;/+-])+(?:\.(?:[a-zA-Z0-9_'^&amp;/+-])+)*@(?:(?:\[?(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))\.){3}(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\]?)|(?:[a-zA-Z0-9-]+\.)+(?:[a-zA-Z]){2,}\.?)$" ErrorMessage="Email address not valid"></asp:RegularExpressionValidator> <asp:CompareValidator ID="CompareValidator1" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToCompare="Email" ControlToValidate="tbxEmailConfirm" ErrorMessage="Email address' do not match"></asp:CompareValidator> <asp:Label ID="lblUsername" runat="server" Text="Username:" AssociatedControlID="UserName"></asp:Label> <asp:TextBox ID="UserName" runat="server" MaxLength="12"></asp:TextBox><br /> <asp:CustomValidator ID="CustomValidatorUserName" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ValidateEmptyText="true" ControlToValidate="UserName" ErrorMessage="Username can be between 6 and 12 characters." 
ClientValidationFunction="ValidateLength" OnServerValidate="ValidateUserName"></asp:CustomValidator> <div class="fieldLine"> <asp:Label ID="lblPassword" runat="server" Text="Password:" AssociatedControlID="Password"></asp:Label> <asp:Label ID="lblPasswordConfirm" runat="server" Text="Confirm Password:" AssociatedControlID="ConfirmPassword" CssClass="confirmPassword"></asp:Label> </div> <div class="fieldLine"> <asp:TextBox ID="Password" runat="server" TextMode="Password"></asp:TextBox> <asp:TextBox ID="ConfirmPassword" runat="server" TextMode="Password"></asp:TextBox><br /> <asp:CustomValidator ID="CustomValidatorPassword" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToValidate="Password" ValidateEmptyText="true" ErrorMessage="Password can be between 6 and 12 characters" ClientValidationFunction="ValidateLength" OnServerValidate="ValidatePassword"></asp:CustomValidator> <asp:CustomValidator ID="CustomValidatorConfirmPassword" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToValidate="ConfirmPassword" ValidateEmptyText="true" ErrorMessage="Password can be between 6 and 12 characters" ClientValidationFunction="ValidateLength" OnServerValidate="ValidatePassword"></asp:CustomValidator> <asp:CompareValidator ID="CompareValidator2" runat="server" Enabled="false" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToCompare="Password" ControlToValidate="ConfirmPassword" ErrorMessage="Passwords do not match"></asp:CompareValidator> </div> <asp:Label ID="lblCaptch" runat="server" Text="Captcha:" AssociatedControlID="imgCaptcha"></asp:Label> <div class="borderBlue" style="width:200px;"> <asp:Image ID="imgCaptcha" runat="server" ImageUrl="~/JpegImage.aspx" /><br /> </div> <asp:TextBox ID="tbxCaptcha" runat="server" CssClass="captchaText"></asp:TextBox> <asp:RequiredFieldValidator ControlToValidate="tbxCaptcha" CssClass="aspValidator" ID="RequiredFieldValidator3" runat="server" ErrorMessage="Required"></asp:RequiredFieldValidator> <asp:CustomValidator ID="CustomValidator1" ControlToValidate="tbxCaptcha" runat="server" OnServerValidate="ValidateCaptcha" ErrorMessage="Captcha incorrect"></asp:CustomValidator> </ContentTemplate> <CustomNavigationTemplate> <div style="float:left;"> <asp:Button ID="CreateUser" runat="server" Text="Register Now!" CausesValidation="true" CommandName="CreateUser" OnCommand="CreateUserClick" CssClass="registerButton" /> </div> </CustomNavigationTemplate> </asp:CreateUserWizardStep> <asp:CompleteWizardStep ID="CompleteWizardStep1" runat="server"> <ContentTemplate> <table border="0" style="font-size: 100%; font-family: Verdana" id="TABLE1" > <tr> <td align="center" colspan="2" style="font-weight: bold; color: white; background-color: #5d7b9d; height: 18px;"> Complete</td> </tr> <tr> <td> Your account has been successfully created.<br /> </td> </tr> <tr> <td align="right" colspan="2"> <asp:Button ID="Button1" PostBackUrl="~/home.aspx" runat="server" Text="Button" /> </td> </tr> </table> </ContentTemplate> </asp:CompleteWizardStep> </WizardSteps> </asp:CreateUserWizard>
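With a templated CreateUserWizardStep, duplicate user name and duplicate e-mail failures are reported through the control named "ErrorMessage", but only when the failure actually reaches the wizard; a custom CommandName="CreateUser" button whose OnCommand handler does its own work can swallow the result, and the page then simply reposts. Here is a hedged sketch that listens for the wizard's CreateUserError event and writes the MembershipCreateStatus into that literal (RegisterPage and the field declaration are placeholders standing in for the real code-behind):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class RegisterPage : Page
{
    // In a real page this field comes from the designer file; it is declared here
    // only so the sketch is self-contained.
    protected CreateUserWizard CreateUserWizard1;

    protected void Page_Load(object sender, EventArgs e)
    {
        CreateUserWizard1.CreateUserError += OnCreateUserError;
    }

    // Surfaces DuplicateUserName, DuplicateEmail and any other failed
    // MembershipCreateStatus in the templated step's "ErrorMessage" literal.
    private void OnCreateUserError(object sender, CreateUserErrorEventArgs e)
    {
        var errorLiteral = (Literal)CreateUserWizard1.CreateUserStep
            .ContentTemplateContainer.FindControl("ErrorMessage");

        if (errorLiteral != null)
            errorLiteral.Text = "Could not create the account: " + e.CreateUserError;
    }
}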

    Read the article

  • Magento - edit form in custom module grid

    - by Shani1351
    I have a custom module and I have a working grid to menage the module items in the admin. My module file structore is : app\code\local\G4R\GroupSales\Block\Adminhtml\Groupsale\ I want to add an edit form so I can view and edit each item in the grid. I followed this tutorial : http://www.magentocommerce.com/wiki/5_-_modules_and_development/0_-_module_development_in_magento/custom_module_with_custom_database_table#part_2_-_backend_administration but when the edit page loads, instead of the tab content I get an error : Fatal error: Call to a member function setData() on a non-object in C:\xampp\htdocs\mystore\app\code\core\Mage\Adminhtml\Block\Widget\Form\Container.php on line 129 This is my code : /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit.php <?php class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit extends Mage_Adminhtml_Block_Widget_Form_Container { public function __construct() { parent::__construct(); $this->_objectId = 'id'; $this->_blockGroup = 'groupsale'; $this->_controller = 'adminhtml_groupsales'; $this->_updateButton('save', 'label', Mage::helper('groupsales')->__('Save Item')); $this->_updateButton('delete', 'label', Mage::helper('groupsales')->__('Delete Item')); } public function getHeaderText() { if( Mage::registry('groupsale_data') && Mage::registry('groupsale_data')->getId() ) { return Mage::helper('groupsales')->__("Edit Item '%s'", $this->htmlEscape(Mage::registry('groupsale_data')->getTitle())); } else { return Mage::helper('groupsales')->__('Add Item'); } } } /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit/Form.php : <?php class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit_Form extends Mage_Adminhtml_Block_Widget_Form { protected function _prepareForm() { $form = new Varien_Data_Form(array( 'id' => 'edit_form', 'action' => $this->getUrl('*/*/save', array('id' => $this->getRequest()->getParam('id'))), 'method' => 'post', ) ); $form->setUseContainer(true); $this->setForm($form); return parent::_prepareForm(); } } /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit/Tabs.php: <?php class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit_Tabs extends Mage_Adminhtml_Block_Widget_Tabs { public function __construct() { parent::__construct(); $this->setId('groupsales_groupsale_tabs'); $this->setDestElementId('edit_form'); $this->setTitle(Mage::helper('groupsales')->__('Groupsale Information')); } protected function _beforeToHtml() { $this->addTab('form_section', array( 'label' => Mage::helper('groupsales')->__('Item Information 1'), 'title' => Mage::helper('groupsales')->__('Item Information 2'), 'content' => $this->getLayout()->createBlock('groupsales/adminhtml_groupsale_edit_tab_form')->toHtml(), )); return parent::_beforeToHtml(); } } /app/code/local/G4R/GroupSales/Block/Adminhtml/Groupsale/Edit/Tab/Form.php : <?php class G4R_GroupSales_Block_Adminhtml_Groupsale_Edit_Tab_Form extends Mage_Adminhtml_Block_Widget_Form { protected function _prepareForm() { $form = new Varien_Data_Form(); $this->setForm($form); $fieldset = $form->addFieldset('groupsales_form', array('legend'=>Mage::helper('groupsales')->__('Item information 3'))); // $fieldset->addField('title', 'text', array( // 'label' => Mage::helper('groupsales')->__('Title'), // 'class' => 'required-entry', // 'required' => true, // 'name' => 'title', // )); // if ( Mage::getSingleton('adminhtml/session')->getGroupsaleData() ) { $form->setValues(Mage::getSingleton('adminhtml/session')->getGroupsaleData()); Mage::getSingleton('adminhtml/session')->setGroupsaleData(null); } elseif ( 
Mage::registry('groupsale_data') ) { $form->setValues(Mage::registry('groupsale_data')->getData()); } return parent::_prepareForm(); } } /app/code/local/G4R/GroupSales/controllers/Adminhtml/GroupsaleController.php : <?php class G4R_GroupSales_Adminhtml_GroupsaleController extends Mage_Adminhtml_Controller_Action { protected function _initAction() { $this->loadLayout() ->_setActiveMenu('groupsale/items') ->_addBreadcrumb(Mage::helper('adminhtml')->__('Items Manager'), Mage::helper('adminhtml')->__('Item Manager')); return $this; } public function indexAction() { $this->_initAction(); $this->_addContent($this->getLayout()->createBlock('groupsales/adminhtml_groupsale')); $this->renderLayout(); } public function editAction() { $groupsaleId = $this->getRequest()->getParam('id'); $groupsaleModel = Mage::getModel('groupsales/groupsale')->load($groupsaleId); if ($groupsaleModel->getId() || $groupsaleId == 0) { Mage::register('groupsale_data', $groupsaleModel); $this->loadLayout(); $this->_setActiveMenu('groupsale/items'); $this->_addBreadcrumb(Mage::helper('adminhtml')->__('Item Manager'), Mage::helper('adminhtml')->__('Item Manager')); $this->_addBreadcrumb(Mage::helper('adminhtml')->__('Item News'), Mage::helper('adminhtml')->__('Item News')); $this->getLayout()->getBlock('head')->setCanLoadExtJs(true); $this->_addContent($this->getLayout()->createBlock('groupsales/adminhtml_groupsale_edit')) ->_addLeft($this->getLayout()->createBlock('groupsales/adminhtml_groupsale_edit_tabs')); $this->renderLayout(); } else { Mage::getSingleton('adminhtml/session')->addError(Mage::helper('groupsales')->__('Item does not exist')); $this->_redirect('*/*/'); } } public function newAction() { $this->_forward('edit'); } public function saveAction() { if ( $this->getRequest()->getPost() ) { try { $postData = $this->getRequest()->getPost(); $groupsaleModel = Mage::getModel('groupsales/groupsale'); $groupsaleModel->setId($this->getRequest()->getParam('id')) ->setTitle($postData['title']) ->setContent($postData['content']) ->setStatus($postData['status']) ->save(); Mage::getSingleton('adminhtml/session')->addSuccess(Mage::helper('adminhtml')->__('Item was successfully saved')); Mage::getSingleton('adminhtml/session')->setGroupsaleData(false); $this->_redirect('*/*/'); return; } catch (Exception $e) { Mage::getSingleton('adminhtml/session')->addError($e->getMessage()); Mage::getSingleton('adminhtml/session')->setGroupsaleData($this->getRequest()->getPost()); $this->_redirect('*/*/edit', array('id' => $this->getRequest()->getParam('id'))); return; } } $this->_redirect('*/*/'); } public function deleteAction() { if( $this->getRequest()->getParam('id') > 0 ) { try { $groupsaleModel = Mage::getModel('groupsales/groupsale'); $groupsaleModel->setId($this->getRequest()->getParam('id')) ->delete(); Mage::getSingleton('adminhtml/session')->addSuccess(Mage::helper('adminhtml')->__('Item was successfully deleted')); $this->_redirect('*/*/'); } catch (Exception $e) { Mage::getSingleton('adminhtml/session')->addError($e->getMessage()); $this->_redirect('*/*/edit', array('id' => $this->getRequest()->getParam('id'))); } } $this->_redirect('*/*/'); } /** * Product grid for AJAX request. * Sort and filter result for example. */ public function gridAction() { $this->loadLayout(); $this->getResponse()->setBody( $this->getLayout()->createBlock('importedit/adminhtml_groupsales_grid')->toHtml() ); } } Any ideas what is the cause for the error?

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have this following pig script, which works perfectly using grunt shell (stored the results to HDFS without any issues); however, the last job (ORDER BY) failed if I ran the same script using Java EmbeddedPig. If I replace the ORDER BY job by others, such as GROUP or FOREACH GENERATE, the whole script then succeeded in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Anyone has any experience with this? Any help would be appreciated! The Pig script: REGISTER pig-udf-0.0.1-SNAPSHOT.jar; user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float); simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score; grouped_user_similarity = GROUP simplified_user_similarity BY user_id; ordered_user_similarity = FOREACH grouped_user_similarity { sorted = ORDER simplified_user_similarity BY sim_score DESC; top = LIMIT sorted 10; GENERATE group, top; }; top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10); all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0); grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id; influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score; ordered_influence_scores = ORDER influence_scores BY influence_score DESC; STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage(); The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs

    Read the article

  • Maven: Unresolved references to [org.osgi.service.http]

    - by Simone Vellei
    I'm trying to create a bundle using HttpService for register Servlet using maven-bundle-plugin. The pom.xml of the project is: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>felix-tutorial</groupId> <artifactId>example-1</artifactId> <version>1.0</version> <packaging>bundle</packaging> <name>Apache Felix Tutorial Example 1</name> <description>Apache Felix Tutorial Example 1</description> <!-- Build Configuration --> <build> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <extensions>true</extensions> <configuration> <instructions> <Bundle-SymbolicName>${pom.groupId}.${pom.artifactId}</Bundle-SymbolicName> <Bundle-Name>Service listener example</Bundle-Name> <Bundle-Description>A bundle that displays messages at startup and when service events occur</Bundle-Description> <Bundle-Vendor>Apache Felix</Bundle-Vendor> <Bundle-Version>1.0.0</Bundle-Version> <Bundle-Activator>tutorial.example1.Activator</Bundle-Activator> <Import-Package>org.osgi.framework;version="1.0.0", javax.servlet, javax.servlet.http</Import-Package> </instructions> </configuration> </plugin> </plugins> </build> <!-- Dependecies Management --> <dependencies> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.apache.felix.framework</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.apache.felix.http.api</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.apache.felix.http.base</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.apache.felix.http.bridge</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.apache.felix.http.bundle</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.apache.felix.http.proxy</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.apache.felix.http.whiteboard</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.osgi</groupId> <artifactId>osgi_R4_compendium</artifactId> <version>1.0</version> </dependency> </dependencies> </project> "mvn install" command returns the following error: [INFO] Scanning for projects... 
[INFO] ------------------------------------------------------------------------ [INFO] Building Apache Felix Tutorial Example 1 [INFO] task-segment: [install] [INFO] ------------------------------------------------------------------------ Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-resources-plugin/2.3/maven-resources-plugin-2.3.pom Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-resources-plugin/2.3/maven-resources-plugin-2.3.jar Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-install-plugin/2.2/maven-install-plugin-2.2.pom Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-install-plugin/2.2/maven-install-plugin-2.2.jar Downloading: http://repo1.maven.org/maven2/org/apache/maven/shared/maven-filtering/1.0-beta-2/maven-filtering-1.0-beta-2.pom Downloading: http://repo1.maven.org/maven2/org/codehaus/plexus/plexus-interpolation/1.6/plexus-interpolation-1.6.pom Downloading: http://repo1.maven.org/maven2/org/codehaus/plexus/plexus-interpolation/1.6/plexus-interpolation-1.6.jar Downloading: http://repo1.maven.org/maven2/org/apache/maven/shared/maven-filtering/1.0-beta-2/maven-filtering-1.0-beta-2.jar [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory C:\eclipse\ws\stripes-bundle\src\main\resources [INFO] [compiler:compile {execution: default-compile}] [INFO] Nothing to compile - all classes are up to date [INFO] [resources:testResources {execution: default-testResources}] [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory C:\eclipse\ws\stripes-bundle\src\test\resources [INFO] [compiler:testCompile {execution: default-testCompile}] [INFO] Nothing to compile - all classes are up to date [INFO] [surefire:test {execution: default-test}] [INFO] Surefire report directory: C:\eclipse\ws\stripes-bundle\target\surefire-reports ------------------------------------------------------- T E S T S ------------------------------------------------------- Running com.beanopoly.stripes.AppTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec Results : Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 [INFO] [bundle:bundle {execution: default-bundle}] [ERROR] Error building bundle felix-tutorial:example-1:bundle:1.0 : Unresolved references to [org.osgi.service.http] by class(es) on the Bundle-Classpath[Jar:do [ERROR] Error(s) found in bundle configuration [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error(s) found in bundle configuration [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 12 seconds [INFO] Finished at: Sat Mar 27 13:11:47 CET 2010 [INFO] Final Memory: 12M/21M [INFO] ------------------------------------------------------------------------

    Read the article

  • Storing session data in MySQL using PHP is not retrieving the data properly from the tables

    - by Ronedog
    I have a problem retrieving some data from $_SESSION using PHP and MySQL. I've commented out the line in php.ini that tells the server to use "files" to store the session info, so my database handler is used instead. I have a class that writes the session information to the database, and it's working fine. When the user passes their credentials the class gets instantiated, the $_SESSION vars get set, and the user gets redirected to the index page. The index.php page includes the file where the db session class is defined, which when instantiated calls session_start(), so the session variables should be in $_SESSION, but when I do var_dump($_SESSION) the array is empty. However, when I look at the data in MySQL, all the session information is there. It's acting like session_start() has not been called, but instantiating the class does call it. Any idea what could be wrong?

    Here's the HTML:

    <?php
    include_once "classes/phpsessions_db/class.dbsession.php"; //used for sessions
    var_dump($_SESSION);
    ?>
    <html>
    . . .
    </html>

    Here's the dbsession class:

    <?php
    error_reporting(E_ALL);

    class dbSession
    {
        function dbSession($gc_maxlifetime = "", $gc_probability = "", $gc_divisor = "")
        {
            // if $gc_maxlifetime is specified and is an integer number
            if ($gc_maxlifetime != "" && is_integer($gc_maxlifetime)) {
                // set the new value
                @ini_set('session.gc_maxlifetime', $gc_maxlifetime);
            }
            // if $gc_probability is specified and is an integer number
            if ($gc_probability != "" && is_integer($gc_probability)) {
                // set the new value
                @ini_set('session.gc_probability', $gc_probability);
            }
            // if $gc_divisor is specified and is an integer number
            if ($gc_divisor != "" && is_integer($gc_divisor)) {
                // set the new value
                @ini_set('session.gc_divisor', $gc_divisor);
            }
            // get session lifetime
            $this->sessionLifetime = ini_get("session.gc_maxlifetime");

            // Added by AARON. Cancel the session's auto start, important, without this
            // the session vars don't show up on the next page.
            session_write_close();

            // register the new handler
            session_set_save_handler(
                array(&$this, 'open'),
                array(&$this, 'close'),
                array(&$this, 'read'),
                array(&$this, 'write'),
                array(&$this, 'destroy'),
                array(&$this, 'gc')
            );
            register_shutdown_function('session_write_close');

            // start the session
            @session_start();
        }

        function stop()
        {
            $new_sess_id = $this->regenerate_id(true);
            session_unset();
            session_destroy();
            return $new_sess_id;
        }

        function regenerate_id($return_val = false)
        {
            // saves the old session's id
            $oldSessionID = session_id();
            // regenerates the id
            // this function will create a new session, with a new id and containing the data from the old session
            // but will not delete the old session
            session_regenerate_id();
            // because the session_regenerate_id() function does not delete the old session,
            // we have to delete it manually
            //$this->destroy($oldSessionID); //ADDED by aaron
            // returns the new session id
            if ($return_val) {
                return session_id();
            }
        }

        function open($save_path, $session_name)
        {
            // global $gf;
            // $gf->debug_this($gf, "GF: Opening Session");
            // change the next values to match the setting of your mySQL database
            $mySQLHost = "localhost";
            $mySQLUsername = "user";
            $mySQLPassword = "pass";
            $mySQLDatabase = "sessions";

            $link = mysql_connect($mySQLHost, $mySQLUsername, $mySQLPassword);
            if (!$link) { die("Could not connect to database!"); }

            $dbc = mysql_select_db($mySQLDatabase, $link);
            if (!$dbc) { die("Could not select database!"); }

            return true;
        }

        function close()
        {
            mysql_close();
            return true;
        }

        function read($session_id)
        {
            $result = @mysql_query("
                SELECT session_data FROM session_data
                WHERE session_id = '".$session_id."'
                AND http_user_agent = '".$_SERVER["HTTP_USER_AGENT"]."'
                AND session_expire > '".time()."'
            ");
            // if anything was found
            if (is_resource($result) && @mysql_num_rows($result) > 0) {
                // return found data
                $fields = @mysql_fetch_assoc($result);
                // don't bother with the unserialization - PHP handles this automatically
                return unserialize($fields["session_data"]);
            }
            // if there was an error return an empty string - this HAS to be an empty string
            return "";
        }

        function write($session_id, $session_data)
        {
            // global $gf;
            // first checks if there is a session with this id
            $result = @mysql_query("
                SELECT * FROM session_data WHERE session_id = '".$session_id."'
            ");
            // if there is
            if (@mysql_num_rows($result) > 0) {
                // update the existing session's data
                // and set new expiry time
                $result = @mysql_query("
                    UPDATE session_data
                    SET session_data = '".serialize($session_data)."',
                        session_expire = '".(time() + $this->sessionLifetime)."'
                    WHERE session_id = '".$session_id."'
                ");
                // if anything happened
                if (@mysql_affected_rows()) {
                    // return true
                    return true;
                }
            }
            else // if this session id is not in the database
            {
                // $gf->debug_this($gf, "inside dbSession, trying to write to db because session id was NOT in db");
                $sql = "
                    INSERT INTO session_data (
                        session_id, http_user_agent, session_data, session_expire
                    ) VALUES (
                        '".serialize($session_id)."',
                        '".$_SERVER["HTTP_USER_AGENT"]."',
                        '".$session_data."',
                        '".(time() + $this->sessionLifetime)."'
                    )
                ";
                // insert a new record
                $result = @mysql_query($sql);
                // if anything happened
                if (@mysql_affected_rows()) {
                    // return an empty string
                    return "";
                }
            }
            // if something went wrong, return false
            return false;
        }

        function destroy($session_id)
        {
            // deletes the current session id from the database
            $result = @mysql_query("
                DELETE FROM session_data WHERE session_id = '".$session_id."'
            ");
            // if anything happened
            if (@mysql_affected_rows()) {
                // return true
                return true;
            }
            // if something went wrong, return false
            return false;
        }

        function gc($maxlifetime)
        {
            // it deletes expired sessions from database
            $result = @mysql_query("
                DELETE FROM session_data WHERE session_expire < '".(time() - $maxlifetime)."'
            ");
        }
    } //End of Class

    $session = new dbsession();
    ?>
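    Two details in the class above are worth checking against the symptom (the data lands in the table, but the next request sees an empty $_SESSION). The INSERT branch of write() stores serialize($session_id) in the session_id column, while read() and the UPDATE branch look the row up by the raw id, so a session created on the login request can never be found again. The INSERT also stores $session_data raw while the UPDATE wraps it in serialize(), yet read() always calls unserialize(). A minimal sketch of an INSERT that stays consistent with read() as written, assuming the same session_data table and columns (escaping the values with mysql_real_escape_string() would also be wise):

    $sql = "
        INSERT INTO session_data (
            session_id, http_user_agent, session_data, session_expire
        ) VALUES (
            '".$session_id."',
            '".$_SERVER["HTTP_USER_AGENT"]."',
            '".serialize($session_data)."',
            '".(time() + $this->sessionLifetime)."'
        )
    ";
    // insert a new record, keyed by the raw session id so read() can find it
    $result = @mysql_query($sql);

    The cleaner long-term shape is to store $session_data in both branches exactly as PHP hands it to write() and return it from read() without unserialize(), since the session machinery already handles serialization, but the sketch above is the smallest change that keeps the existing read() working.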

    Read the article

  • CodeIgniter: Using URIs with forms

    - by Kevin Brown
    I'm using URIs to direct a function in a library:

    $id = $this->CI->session->userdata('id');
    $URI = $this->CI->uri->uri_string();
    $new = "new";
    if (strpos($URI, $new) === FALSE) {
        $method = "update";
    }
    elseif (strpos($URI, $new) !== FALSE) {
        $method = "create";
    }

    So I have two if statements directing what information to load:

    if ($method === 'update')
    {
        // Modify form, first load
        $this->CI->db->from('be_survey');
        $this->CI->db->where('user_id', $id);
        $survey = $this->CI->db->get();
        $user = array_merge($user->row_array(), $survey->row_array());
        $this->CI->validation->set_default_value($user);
        // Display page
        $data['user'] = $user;
    }

    $this->CI->validation->set_rules($rules);
    if ($this->CI->validation->run() === FALSE)
    {
        // Output any errors
        $this->CI->validation->output_errors();
    }
    else
    {
        // Submit form
        $this->_submit($method);
    }

    Submit function:

    function _submit($method)
    {
        //Submit and Update for current User
        $id = $this->CI->session->userdata('id');
        $this->CI->db->select('users.id, users.username, users.email, profiles.firstname, profiles.manager_id');
        $this->CI->db->from('be_users' . " users");
        $this->CI->db->join('be_user_profiles' . " profiles", 'users.id=profiles.user_id');
        $this->CI->db->having('id', $id);
        $email_data['user'] = $this->CI->db->get();
        $email_data['user'] = $email_data['user']->row();
        $manager_id = $email_data['user']->manager_id;

        $this->CI->db->select('firstname','email')->from('be_user_profiles')->where('user_id', $manager_id);
        $email_data['manager'] = $this->CI->db->get();
        $email_data['manager'] = $email_data['manager']->row();

        // Fetch what they entered in the form
        for ($i = 1; $i < 18; $i++) { $survey["a_".$i] = $this->CI->input->post('a_'.$i); }
        for ($i = 1; $i < 15; $i++) { $survey["b_".$i] = $this->CI->input->post('b_'.$i); }
        for ($i = 1; $i < 12; $i++) { $survey["c_".$i] = $this->CI->input->post('c_'.$i); }

        $profile['firstname'] = $this->CI->input->post('firstname');
        $profile['lastname'] = $this->CI->input->post('lastname');
        $profile['test_date'] = date("Y-m-d H:i:s");
        $profile['company_name'] = $this->CI->input->post('company_name');
        $profile['company_address'] = $this->CI->input->post('company_address');
        $profile['company_city'] = $this->CI->input->post('company_city');
        $profile['company_phone'] = $this->CI->input->post('company_phone');
        $profile['company_state'] = $this->CI->input->post('company_state');
        $profile['company_zip'] = $this->CI->input->post('company_zip');
        $profile['job_title'] = $this->CI->input->post('job_title');
        $profile['job_type'] = $this->CI->input->post('job_type');
        $profile['job_time'] = $this->CI->input->post('job_time');
        $profile['department'] = $this->CI->input->post('department');
        $profile['vision'] = $this->CI->input->post('vision');
        $profile['height'] = $this->CI->input->post('height');
        $profile['weight'] = $this->CI->input->post('weight');
        $profile['hand_dominance'] = $this->CI->input->post('hand_dominance');
        $profile['areas_of_fatigue'] = $this->CI->input->post('areas_of_fatigue');
        $profile['job_description'] = $this->CI->input->post('job_description');
        $profile['injury_review'] = $this->CI->input->post('injury_review');
        $profile['job_positive'] = $this->CI->input->post('job_positive');
        $profile['risk_factors'] = $this->CI->input->post('risk_factors');
        $profile['job_improvement_short'] = $this->CI->input->post('job_improvement_short');
        $profile['job_improvement_long'] = $this->CI->input->post('job_improvement_long');

        if ($method == "update")
        {
            //Begin db transmission
            $this->CI->db->trans_begin();
            $this->CI->home_model->update('Survey', $survey, array('user_id' => $id));
            $this->CI->db->update('be_user_profiles', $profile, array('user_id' => $id));

            if ($this->CI->db->trans_status() === FALSE)
            {
                flashMsg('error','There was a problem entering your test! Please contact an administrator.');
                redirect('survey','location');
            }
            else
            {
                //Get credits of user and subtract 1
                $this->CI->db->set('credits', 'credits -1', FALSE);
                $this->CI->db->update('be_user_profiles', $profile, array('user_id' => $manager_id));

                //Mark the form completed.
                $this->CI->db->set('test_complete', '1');
                $this->CI->db->where('user_id', $id)->update('be_user_profiles');

                // Stuff worked...
                $this->CI->db->trans_commit();

                //Get Manager Information
                $this->CI->db->select('users.id, users.username, users.email, profiles.firstname');
                $this->CI->db->from('be_users' . " users");
                $this->CI->db->join('be_user_profiles' . " profiles", 'users.id=profiles.user_id');
                $this->CI->db->having('id', $email_data['user']->manager_id);
                $email_data['manager'] = $this->CI->db->get();
                $email_data['manager'] = $email_data['manager']->row();

                //Email User
                $this->CI->load->library('User_email');
                $data_user = array(
                    'firstname' => $email_data['user']->firstname,
                    'email' => $email_data['user']->email,
                    'user_completed' => $email_data['user']->firstname,
                    'site_name' => $this->CI->preference->item('site_name'),
                    'site_url' => base_url()
                );
                //Email Manager
                $data_manager = array(
                    'firstname' => $email_data['manager']->firstname,
                    'email' => $email_data['manager']->email,
                    'user_completed' => $email_data['user']->firstname,
                    'site_name' => $this->CI->preference->item('site_name'),
                    'site_url' => base_url()
                );
                $this->CI->user_email->send($email_data['manager']->email,'Completed the Assessment Tool','public/email_manager_complete',$data_manager);
                $this->CI->user_email->send($email_data['user']->email,'Completed the Assessment Tool','public/email_user_complete',$data_user);

                flashMsg('success','You finished the assessment successfully!');
                redirect('home','location');
            }
        }
        //Create New User
        elseif ($method == "create")
        {
            // Build
            $profile['user_id'] = $id;
            $profile['manager_id'] = $manager_id;
            $profile['test_complete'] = '1';
            $survey['user_id'] = $id;

            $this->CI->db->trans_begin();
            // Add user_profile details to DB
            $this->CI->db->insert('be_user_profiles', $profile);
            $this->CI->db->insert('be_survey', $survey);

            if ($this->CI->db->trans_status() === FALSE)
            {
                // Registration failed
                $this->CI->db->trans_rollback();
                flashMsg('error', $this->CI->lang->line('userlib_registration_failed'));
                redirect('auth/register','location');
            }
            else
            {
                // User registered
                $this->CI->db->trans_commit();
                flashMsg('success', $this->CI->lang->line('userlib_registration_success'));
                redirect($this->CI->config->item('userlib_action_register'),'location');
            }
        }
    }

    The submit function is similar: it updates the db if $method == "update" and inserts if $method == "create". The problem is that when the form is submitted, the URL is no longer taken into account, because the form submits to the "survey" function, which passes data to the library function, so records are always updated and never created. *How can I pass $method to the _submit() function correctly?*
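    One way to handle this, since the POST round-trip loses the original "new" URI, is to carry the mode across the form submit instead of re-deriving it from uri_string() at submit time. A minimal sketch, assuming CodeIgniter's form helper is loaded; the hidden field name "method" is illustrative and not from the original code:

    // In the form view: carry the mode through the POST
    echo form_hidden('method', $method);

    // In the library, before calling _submit(): prefer the posted value,
    // but only accept the two known modes to guard against tampering
    $posted = $this->CI->input->post('method');
    if ($posted === 'create' || $posted === 'update')
    {
        $method = $posted;
    }
    else
    {
        // first (non-POST) display of the form: fall back to the URI check
        $method = (strpos($this->CI->uri->uri_string(), 'new') === FALSE) ? 'update' : 'create';
    }
    $this->_submit($method);

    Encoding the mode in the form's action URL (so the submitting controller method receives it as a segment) would work just as well; the key point is that the decision has to survive the redirect to the "survey" function rather than being recomputed from a URL that no longer contains "new".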

    Read the article

< Previous Page | 174 175 176 177 178 179 180  | Next Page >