Search Results



  • ASP.NET Controls – CommunityServer Captcha ControlAdapter, a practical case

    - by nmgomes
    The ControlAdapter has been available since .NET Framework 2.0 and its main goal is to adapt and customize a control's rendering in order to achieve a specific behavior or layout. This customization is done without changing the base control. A ControlAdapter is commonly used to customize rendering for specific platforms, such as mobile. In this particular case the ControlAdapter is used to add a specific behavior to a control: in this post I will use one adapter to add a Captcha to all WeblogPostCommentForm controls within the pontonetpt.com CommunityServer instance.

The Challenge

The complexity of a ControlAdapter is usually tied to the complexity and structure of its base control. This case is precisely one of those, since the base control dynamically loads its content (controls) through several ITemplate instances. Those of you who have already worked with ITemplate know that, while it is an excellent option for control composition, it also brings a big issue to the table: “Controls defined within a template are not available for manipulation until they are instantiated inside another control.” While analyzing the WeblogPostCommentForm control I found that it uses the ITemplate technique to compose its layout, and unfortunately I also found that the template content varies from theme to theme. This could have been a problem, but luckily the WeblogPostCommentForm template content always contains a submit button with a well-known ID (at least I can assume there is a well-known set of IDs). Using this submit button as an anchor, it is possible to add the Captcha controls in the correct place. Another important finding was that the WeblogPostCommentForm control inherits from the WrappedFormBase control, which is the base control for all CommunityServer input forms. Knowing this inheritance link, the main goal became the creation of a base ControlAdapter that could be extended and customized to add a Captcha to: the post comments form, the contact form and the user creation form. And, with this mindset, I decided to use the following ControlAdapter base class signature:

public abstract class WrappedFormBaseCaptchaAdapter<T> : ControlAdapter where T : WrappedFormBase { }

Great, but there is still plenty to do…

Captcha

The Captcha will be assembled from: a dynamically generated image containing a set of random numbers; a TextBox control where the numbers from the image are entered; and a Validator control that checks whether the TextBox numbers match the image numbers. This is a common Captcha implementation, it is not rocket science and it does not bring any additional problems. The main problem, as mentioned before, is finding the correct anchor control so that the Captcha controls are injected in the right place.
The anchor control can vary by target control and by theme.

Implementation

To support this dynamic scenario I chose the following implementation:

private List<string> _validAnchorIds = null;

protected virtual List<string> ValidAnchorIds
{
    get
    {
        if (this._validAnchorIds == null)
        {
            this._validAnchorIds = new List<string>();
            this._validAnchorIds.Add("btnSubmit");
        }
        return this._validAnchorIds;
    }
}

private Control GetAnchorControl(T wrapper)
{
    if (this.ValidAnchorIds == null || this.ValidAnchorIds.Count == 0)
    {
        throw new ArgumentException("Cannot be null or empty", "validAnchorNames");
    }
    var q = from anchorId in this.ValidAnchorIds
            let anchorControl = CSControlUtility.Instance().FindControl(wrapper, anchorId)
            where anchorControl != null
            select anchorControl;
    return q.FirstOrDefault();
}

Using the ValidAnchorIds property I can now configure a set of valid anchor control IDs. The GetAnchorControl method searches for a valid anchor control within that set. Some of you may question the use of a LINQ to Objects expression here, but the important thing to notice is the use of the CommunityServer method CSControlUtility.Instance().FindControl: I want to build on top of CommunityServer, not reinvent the wheel. Assuming an anchor control was found, it is now possible to inject the Captcha at the correct place. This is nothing new; we do it all the time when creating server controls or adding dynamic controls:

protected sealed override void CreateChildControls()
{
    base.CreateChildControls();
    if (this.IsCaptchaRequired)
    {
        T wrapper = base.Control as T;
        if (wrapper != null)
        {
            Control anchorControl = GetAnchorControl(wrapper);
            if (anchorControl != null)
            {
                Panel phCaptcha = new Panel { CssClass = "CommonFormField", ID = "Captcha" };
                int index = anchorControl.Parent.Controls.IndexOf(anchorControl);
                anchorControl.Parent.Controls.AddAt(index, phCaptcha);
                CaptchaConfiguration.DefaultProvider.AddCaptchaControls(
                    phCaptcha, GetValidationGroup(wrapper, anchorControl));
            }
        }
    }
}

Here you can see a new entity in action: a provider. This is a CaptchaProvider instance and its only goal is to create the Captcha itself and do everything else needed to ensure its correct operation.

public abstract class CaptchaProvider : ProviderBase
{
    public abstract void AddCaptchaControls(Panel captchaPanel, string validationGroup);
}

You can create your own specific CaptchaProvider class to use different Captcha strategies, including existing Captcha services like ReCaptcha. Once the generic ControlAdapter was created, it became extremely easy to create a specific one. Here is the specific ControlAdapter for the WeblogPostCommentForm control:

public class WeblogPostCommentFormCaptchaAdapter : WrappedFormBaseCaptchaAdapter<WrappedFormBase>
{
    #region Overriden Methods

    protected override List<string> ValidAnchorIds
    {
        get
        {
            List<string> validAnchorNames = base.ValidAnchorIds;
            validAnchorNames.Add("CommentSubmit");
            return validAnchorNames;
        }
    }

    protected override string DefaultValidationGroup
    {
        get { return "CreateCommentForm"; }
    }

    #endregion Overriden Methods
}

Configuration

This is the magic step. Without changing the original pages and keeping the application's original assemblies untouched, we are going to add a new behavior to the CommunityServer application.
To glue everything together you must follow these steps. Add the following configuration to the default.browser file:

<?xml version='1.0' encoding='utf-8'?>
<browsers>
  <browser refID="Default">
    <controlAdapters>
      <!-- Adapter for the WeblogPostCommentForm control in order to add the Captcha and prevent SPAM comments -->
      <adapter controlType="CommunityServer.Blogs.Controls.WeblogPostCommentForm"
               adapterType="NunoGomes.CommunityServer.Components.WeblogPostCommentFormCaptchaAdapter, NunoGomes.CommunityServer" />
    </controlAdapters>
  </browser>
</browsers>

Add the following configuration to the web.config file:

<configuration>
  <configSections>
    <!-- New section for Captcha providers configuration -->
    <section name="communityServer.Captcha" type="NunoGomes.CommunityServer.Captcha.Configuration.CaptchaSection" />
  </configSections>
  <!-- Configuring a simple Captcha provider -->
  <communityServer.Captcha defaultProvider="simpleCaptcha">
    <providers>
      <add name="simpleCaptcha"
           type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProvider, NunoGomes.CommunityServer"
           imageUrl="~/captcha.ashx"
           enabled="true"
           passPhrase="_YourPassPhrase_"
           saltValue="_YourSaltValue_"
           hashAlgorithm="SHA1"
           passwordIterations="3"
           keySize="256"
           initVector="_YourInitVectorWithExactly_16_Bytes_" />
    </providers>
  </communityServer.Captcha>
  <system.web>
    <httpHandlers>
      <!-- The Captcha Image handler used by the simple Captcha provider -->
      <add verb="GET" path="captcha.ashx" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProviderImageHandler, NunoGomes.CommunityServer" />
    </httpHandlers>
  </system.web>
  <system.webServer>
    <handlers accessPolicy="Read, Write, Script, Execute">
      <!-- The Captcha Image handler used by the simple Captcha provider -->
      <add verb="GET" name="captcha" path="captcha.ashx" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProviderImageHandler, NunoGomes.CommunityServer" />
    </handlers>
  </system.webServer>
</configuration>

Conclusion

Building a ControlAdapter can be complex, but the reward is its ability to let us modify an application's rendering and/or behavior through configuration changes alone. You can see this ControlAdapter in action here and here (anonymous required). A complete solution is available in the “CommunityServer Extensions” Codeplex project.
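For reference, here is a minimal, hedged sketch of what a concrete CaptchaProvider might look like. This is not the SimpleCaptchaProvider shipped with the project: the control IDs, the session key and the validation logic below are assumptions made purely for illustration; the sketch only reuses the abstract class and the ~/captcha.ashx handler URL shown above.

using System;
using System.Web;
using System.Web.UI.WebControls;

// Illustrative provider: an image served by the captcha handler, a TextBox for the
// answer and a CustomValidator comparing the answer with a value kept in session
// (the real provider derives the expected value differently).
public class MinimalCaptchaProvider : CaptchaProvider
{
    public override void AddCaptchaControls(Panel captchaPanel, string validationGroup)
    {
        captchaPanel.Controls.Add(new Image { ImageUrl = "~/captcha.ashx" });

        var answerBox = new TextBox { ID = "CaptchaAnswer" };
        captchaPanel.Controls.Add(answerBox);

        var validator = new CustomValidator
        {
            ControlToValidate = answerBox.ID,
            ValidationGroup = validationGroup,
            ErrorMessage = "The numbers entered do not match the image.",
            ValidateEmptyText = true
        };
        validator.ServerValidate += (sender, args) =>
            args.IsValid = string.Equals(
                args.Value,
                HttpContext.Current.Session["CaptchaExpected"] as string,
                StringComparison.Ordinal);
        captchaPanel.Controls.Add(validator);
    }
}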


  • PPTP connection disconnect

    - by Vladimir Franciz S. Blando
    My pptp connection wont stay connected, it will disconnect in less than a minute here are some relevant log entries May 31 13:32:31 localhost NetworkManager[931]: <info> Starting VPN service 'pptp'... May 31 13:32:31 localhost NetworkManager[931]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 15216 May 31 13:32:31 localhost NetworkManager[931]: <info> VPN service 'pptp' appeared; activating connections May 31 13:32:31 localhost NetworkManager[931]: <info> VPN plugin state changed: init (1) May 31 13:32:31 localhost NetworkManager[931]: <info> VPN plugin state changed: starting (3) May 31 13:32:31 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (Connect) reply received. May 31 13:32:31 localhost pppd[15221]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded. May 31 13:32:31 localhost pppd[15221]: pppd 2.4.5 started by root, uid 0 May 31 13:32:31 localhost pptp[15224]: nm-pptp-service-15216 log[main:pptp.c:314]: The synchronous pptp option is NOT activated May 31 13:32:31 localhost pppd[15221]: Using interface ppp0 May 31 13:32:31 localhost pppd[15221]: Connect: ppp0 <--> /dev/pts/5 May 31 13:32:31 localhost NetworkManager[931]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) May 31 13:32:31 localhost NetworkManager[931]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request' May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established. May 31 13:32:33 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' May 31 13:32:34 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. May 31 13:32:34 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1536). May 31 13:32:37 localhost pppd[15221]: CHAP authentication succeeded May 31 13:32:37 localhost kernel: [54007.078553] PPP MPPE Compression module registered May 31 13:32:40 localhost pppd[15221]: MPPE 128-bit stateless compression enabled May 31 13:32:42 localhost pppd[15221]: local IP address 10.100.0.52 May 31 13:32:42 localhost pppd[15221]: remote IP address 10.100.0.1 May 31 13:32:42 localhost pppd[15221]: primary DNS address 4.2.2.1 May 31 13:32:42 localhost pppd[15221]: secondary DNS address 255.255.255.255 May 31 13:32:42 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (IP Config Get) reply received. 
May 31 13:32:42 localhost NetworkManager[931]: <info> VPN Gateway: 103.28.219.2 May 31 13:32:42 localhost NetworkManager[931]: <info> Tunnel Device: ppp0 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Address: 10.100.0.52 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Prefix: 32 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Point-to-Point Address: 10.100.0.1 May 31 13:32:42 localhost NetworkManager[931]: <info> Maximum Segment Size (MSS): 0 May 31 13:32:42 localhost NetworkManager[931]: <info> Forbid Default Route: no May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 DNS: 4.2.2.1 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 DNS: 255.255.255.255 May 31 13:32:42 localhost NetworkManager[931]: <info> DNS Domain: '(none)' May 31 13:32:43 localhost dnsmasq[2127]: exiting on receipt of SIGTERM May 31 13:32:43 localhost NetworkManager[931]: <info> DNS: starting dnsmasq... May 31 13:32:43 localhost NetworkManager[931]: <info> (ppp0): writing resolv.conf to /sbin/resolvconf May 31 13:32:43 localhost dnsmasq[15290]: error at line 2 of /var/run/nm-dns-dnsmasq.conf May 31 13:32:43 localhost dnsmasq[15290]: FAILED to start up May 31 13:32:43 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (IP Config Get) complete. May 31 13:32:43 localhost NetworkManager[931]: <info> Policy set 'Dynalabs' (ppp0) as default for IPv4 routing and DNS. May 31 13:32:43 localhost NetworkManager[931]: <info> VPN plugin state changed: started (4) May 31 13:32:43 localhost NetworkManager[931]: <warn> dnsmasq exited with error: Configuration problem (1) May 31 13:32:43 localhost NetworkManager[931]: <info> (ppp0): writing resolv.conf to /sbin/resolvconf May 31 13:32:43 localhost dbus[872]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) May 31 13:32:43 localhost dbus[872]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' May 31 13:33:00 localhost ntpdate[15370]: step time server 91.189.94.4 offset -1.110301 sec May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd6d6 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x93aa May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xcc83 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2031 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x13d4 May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x5b11 May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x414b May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2f5f May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe9ff May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8e20 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8f0 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf166 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x36e6 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xdd19 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xda26 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xac5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 
0x53a5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x507e May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x1dc5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf87b May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2f27 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd10c May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x66ef May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xa294 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xb15 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x52a2 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd863 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8a96 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xde19 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x9763 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xb23 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x83ca May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x964e May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe8ae May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf614 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x9b1 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf086 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xbff4 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x66c5 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe42 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf295 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x86fe May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x3bc1 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xbaad May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x88b5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd7a May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x30d5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2d8f May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x3933 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8d42 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x4b4 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xa205 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x7cc5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x1b6a May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf004 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x21b6 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x51eb


  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all websites that ever linked to you and you are breaking all search engine links to your content (that others will try and follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration which I'll talk about in my next post. In this post, I'll describe my solution first and then what it solves. 1. The IIS7 Rewrite Module and web.config There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 rewrite module. The IIS7 rewrite module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. Here is what mine looked like after configuring a few rules: To learn more about this technology check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too. All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the Rewrite Module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, and then took the generated web.config file and uploaded it to my live site. You can view my web.config here. 2. Monthly Archives Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004_07_01_mothblog_archive.html dasBlog: /Blog/default,month,2004-07.aspx In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect". 3. Categories Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/labels/Personal.html dasBlog: /Blog/CategoryView,category,Personal.aspx In my web.config file the rule that deals with this is the one named "category_redirect". 4. Posts Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004/07/hello-world.html dasBlog: /Blog/Hello-World.aspx In my web.config file the rule that deals with this is the one named "post_redirect". Note: The decision is taken to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info then it would have to include the day part, which blogger did not generate. This makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision, is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher. 5. 
Unhandled by a generic rule Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section where a post titled "Hello World" generates a URL with the words separated by a hyphen. Some times that is not the case, for example: /Blog/2006/05/medc-wrap-up.html /Blog/MEDC-Wrapup.aspx or /Blog/2005/01/best-of-moth-2004.html /Blog/Best-Of-The-Moth-2004.aspx or /Blog/2004/11/more-windows-mobile-2005-details.html /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx In short, blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserve hyphens that appear in the title. For this reason, we need to detect these and explicitly list them for redirects (no regular expression can help here because the full set of rules is not listed anywhere). In my web.config file the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects". Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy them to your web.config rewriteMap. 6. C# code doing the same as the web.config I wrote some naive code that does the same thing as the web.config: given a string it will return a new string converted according to the 3 rules above. It does not take into account the 4th case where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account). static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html"; static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html"; static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html"; static string Redirect(string oldUrl) { GroupCollection g; if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g)) return string.Concat(g[1].Value, ".aspx"); if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g)) return string.Concat("CategoryView,category,", g[1].Value, ".aspx"); if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g)) return string.Concat("default,month,", g[1].Value, "-", g[2], ".aspx"); return string.Empty; } static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g) { if (pattern.Length == 0) { g = null; return false; } g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups; return (g.Count == groupCount); } Comments about this post welcome at the original blog.
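To make the three rules concrete, here is a small usage sketch. It assumes the Redirect and RunRegExOnIt methods above are compiled into the same class (together with using System; and using System.Text.RegularExpressions;); the expected outputs in the comments follow the mappings described earlier in this post.

static void Main()
{
    // Post rule: /Blog/2004/07/hello-world.html -> hello-world.aspx
    Console.WriteLine(Redirect("/Blog/2004/07/hello-world.html"));

    // Category rule: /Blog/labels/Personal.html -> CategoryView,category,Personal.aspx
    Console.WriteLine(Redirect("/Blog/labels/Personal.html"));

    // Monthly archive rule: /Blog/2004_07_01_mothblog_archive.html -> default,month,2004-07.aspx
    Console.WriteLine(Redirect("/Blog/2004_07_01_mothblog_archive.html"));

    // A title blogger shortened: the generic rule yields "medc-wrap-up.aspx", which is not
    // the real /Blog/MEDC-Wrapup.aspx permalink - exactly the kind of URL that has to go
    // into the StaticRedirects rewriteMap instead.
    Console.WriteLine(Redirect("/Blog/2006/05/medc-wrap-up.html"));
}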


  • Improving WIF’s Claims-based Authorization - Part 2

    - by Your DisplayName here!
    In the last post I showed you how to take control over the invocation of ClaimsAuthorizationManager. Then you have complete freedom over the claim types, the amount of claims and the values. In addition I added two attributes that invoke the authorization manager using an “application claim type”. This way it is very easy to distinguish between authorization calls that originate from WIF’s per-request authorization and the ones from “within” you application. The attribute comes in two flavours: a CAS attribute (invoked by the CLR) and an ASP.NET MVC attribute (for MVC controllers, invoke by the MVC plumbing). Both also feature static methods to easily call them using the application claim types. The CAS attribute is part of Thinktecture.IdentityModel on Codeplex (or via NuGet: Install-Package Thinktecture.IdentityModel). If you really want to see that code ;) There is also a sample included in the Codeplex donwload. The MVC attribute is currently used in Thinktecture.IdentityServer – and I don’t currently plan to make it part of the library project since I don’t want to add a dependency on MVC for now. You can find the code below – and I will write about its usage in a follow-up post. public class ClaimsAuthorize : AuthorizeAttribute {     private string _resource;     private string _action;     private string[] _additionalResources;     /// <summary>     /// Default action claim type.     /// </summary>     public const string ActionType = "http://application/claims/authorization/action";     /// <summary>     /// Default resource claim type     /// </summary>     public const string ResourceType = "http://application/claims/authorization/resource";     /// <summary>     /// Additional resource claim type     /// </summary>     public const string AdditionalResourceType = "http://application/claims/authorization/additionalresource"          public ClaimsAuthorize(string action, string resource, params string[] additionalResources)     {         _action = action;         _resource = resource;         _additionalResources = additionalResources;     }     public static bool CheckAccess(       string action, string resource, params string[] additionalResources)     {         return CheckAccess(             Thread.CurrentPrincipal as IClaimsPrincipal,             action,             resource,             additionalResources);     }     public static bool CheckAccess(       IClaimsPrincipal principal, string action, string resource, params string[] additionalResources)     {         var context = CreateAuthorizationContext(             principal,             action,             resource,             additionalResources);         return ClaimsAuthorization.CheckAccess(context);     }     protected override bool AuthorizeCore(HttpContextBase httpContext)     {         return CheckAccess(_action, _resource, _additionalResources);     }     private static WIF.AuthorizationContext CreateAuthorizationContext(       IClaimsPrincipal principal, string action, string resource, params string[] additionalResources)     {         var actionClaims = new Collection<Claim>         {             new Claim(ActionType, action)         };         var resourceClaims = new Collection<Claim>         {             new Claim(ResourceType, resource)         };         if (additionalResources != null && additionalResources.Length > 0)         {             additionalResources.ToList().ForEach(ar => resourceClaims.Add(               new Claim(AdditionalResourceType, ar)));         }         return new 
WIF.AuthorizationContext(             principal,             resourceClaims,             actionClaims);     } }
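A short, hedged usage preview (the full write-up is promised in the follow-up post): the controller, action and resource names below are made up for illustration only.

using System.Web.Mvc;

public class CustomerController : Controller
{
    // Declarative: the attribute invokes the ClaimsAuthorizationManager with
    // action "Read" and resource "Customer" before the action method runs.
    [ClaimsAuthorize("Read", "Customer")]
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult Delete(int id)
    {
        // Imperative: the static helper builds the same application claim types.
        if (!ClaimsAuthorize.CheckAccess("Delete", "Customer", id.ToString()))
        {
            return new HttpUnauthorizedResult();
        }

        // ... delete the customer here ...
        return RedirectToAction("Index");
    }
}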


  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.NET developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses the process of creating an in-memory file and reading/writing data from/to it using the new MemoryMappedFile feature. I have a database of users to whom I need to send a notice as a PDF document. The part about sending the mail is not covered in this article. The PDF document will contain the company letterhead, to make it more official. I have a list of users stored in a database table named “tblusers”. For each user I need to send a customized message addressed to them personally. The database structure for the users is given below.

id      Title   Full Name
1       Mr.     Sreeju Nair K. G.
2       Dr.     Alberto Mathews
3       Prof.   Venketachalam

Now I am going to generate the PDF document that contains a message to the user, in the following format:

Dear <Title> <FullName>,
The message for the user.
Regards,
Administrator

I also have an image, bg.jpg, that contains the background for the generated document. I have created a .NET 4.0 empty web application project named “iTextSharpSample”. The first thing I need to do is download the iTextSharp DLL from SourceForge. You can find the URL for the download here: http://sourceforge.net/projects/itextsharp/files/ I have extracted the Zip file and added itextsharp.dll as a reference to my project. I have also added a web form named default.aspx to the project. After doing all this, the Solution Explorer has the following view. In the default.aspx page, I inserted one GridView and associated it with a SQL data source control that binds data from tblusers. I have added a button column to the grid view with the text “generate pdf”. The output of the page in the browser is as follows. Now I am going to create a PDF document when the user clicks the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying the onrowcommand attribute. My GridView source looks like:

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1"
    Width="481px" CellPadding="4" ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF" >
    …………………………………………………………………………..
    …………………………………………………………………………..
</asp:GridView>

In the code-behind, I wrote the corresponding event handler:

protected void Generate_PDF(object sender, GridViewCommandEventArgs e)
{
    // The button click event handler code.
    // I am going to explain the code for this section in the remaining part of the article.
}

The Generate_PDF method is straightforward: it gets the title, full name and message into variables, then creates the PDF using them. The code for getting the data from the grid view is as follows:

// get the row index stored in the CommandArgument property
int index = Convert.ToInt32(e.CommandArgument);
// get the GridViewRow where the command is raised
GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index];
string title = selectedRow.Cells[1].Text;
string fullname = selectedRow.Cells[2].Text;
string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personal details by visiting the member area of the website. ...................................";

Since I don’t want to save the file to disk, I am going to use a new feature introduced in .NET Framework 4 called Memory-Mapped Files. Using memory-mapped files, you can create non-persisted memory-mapped files that are not associated with a file on disk. So I am going to create a temporary file in memory, add the PDF content to it, and then write it to the output stream. To read more about MemoryMappedFile, read this MSDN article: http://msdn.microsoft.com/en-us/library/dd997372.aspx The portion of code below uses a MemoryMappedFile object to create a test PDF document in memory and performs read/write operations on the file. The CreateViewStream() method gives you a stream that can be used to read or write data to/from the file. The code is straightforward and I have included comments so that you can understand it.

using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000))
{
    // Create a new pdf document object using the constructor. The parameters passed are
    // document size, left margin, right margin, top margin and bottom margin.
    iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72, 72, 172, 72);

    // get an instance of the memory mapped file to stream object so that user can write to this
    using (MemoryMappedViewStream stream = mmf.CreateViewStream())
    {
        // associate the document to the stream.
        PdfWriter.GetInstance(d, stream);

        /* add an image as bg */
        iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png"));
        jpg.Alignment = iTextSharp.text.Image.UNDERLYING;
        jpg.SetAbsolutePosition(0, 0);
        // this is the size of my background letter head image. the size is in points. this will fit to A4 size document.
        jpg.ScaleToFit(595, 842);

        d.Open();
        d.Add(jpg);
        d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname)));
        d.Add(new Paragraph("\n"));
        d.Add(new Paragraph(msg));
        d.Add(new Paragraph("\n"));
        d.Add(new Paragraph(String.Format("Administrator")));
        d.Close();
    }

    // read the file data
    byte[] b;
    using (MemoryMappedViewStream stream = mmf.CreateViewStream())
    {
        BinaryReader rdr = new BinaryReader(stream);
        b = new byte[mmf.CreateViewStream().Length];
        rdr.Read(b, 0, (int)mmf.CreateViewStream().Length);
    }

    Response.Clear();
    Response.ContentType = "Application/pdf";
    Response.BinaryWrite(b);
    Response.End();
}

Press Ctrl + F5 to run the application. First I get the user list. Click on the generate pdf icon. The created document looks as follows. Summary: Creating a PDF document using iTextSharp is easy, and you will find plenty of information on the web. Some useful resources and references are mentioned below:
http://itextsharp.com/
http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs
http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/
Hope you enjoyed the article.


  • What's up with LDoms: Part 4 - Virtual Networking Explained

    - by Stefan Hinker
    I'm back from my summer break (and some pressing business that kept me away from this), ready to continue with Oracle VM Server for SPARC ;-) In this article, we'll have a closer look at virtual networking.  Basic connectivity as we've seen it in the first, simple example, is easy enough.  But there are numerous options for the virtual switches and virtual network ports, which we will discuss in more detail now.   In this section, we will concentrate on virtual networking - the capabilities of virtual switches and virtual network ports - only.  Other options involving hardware assignment or redundancy will be covered in separate sections later on. There are two basic components involved in virtual networking for LDoms: Virtual switches and virtual network devices.  The virtual switch should be seen just like a real ethernet switch.  It "runs" in the service domain and moves ethernet packets back and forth.  A virtual network device is plumbed in the guest domain.  It corresponds to a physical network device in the real world.  There, you'd be plugging a cable into the network port, and plug the other end of that cable into a switch.  In the virtual world, you do the same:  You create a virtual network device for your guest and connect it to a virtual switch in a service domain.  The result works just like in the physical world, the network device sends and receives ethernet packets, and the switch does all those things ethernet switches tend to do. If you look at the reference manual of Oracle VM Server for SPARC, there are numerous options for virtual switches and network devices.  Don't be confused, it's rather straight forward, really.  Let's start with the simple case, and work our way to some more sophisticated options later on.  In many cases, you'll want to have several guests that communicate with the outside world on the same ethernet segment.  In the real world, you'd connect each of these systems to the same ethernet switch.  So, let's do the same thing in the virtual world: root@sun # ldm add-vsw net-dev=nxge2 admin-vsw primary root@sun # ldm add-vnet admin-net admin-vsw mars root@sun # ldm add-vnet admin-net admin-vsw venus We've just created a virtual switch called "admin-vsw" and connected it to the physical device nxge2.  In the physical world, we'd have powered up our ethernet switch and installed a cable between it and our big enterprise datacenter switch.  We then created a virtual network interface for each one of the two guest systems "mars" and "venus" and connected both to that virtual switch.  They can now communicate with each other and with any system reachable via nxge2.  If primary were running Solaris 10, communication with the guests would not be possible.  This is different with Solaris 11, please see the Admin Guide for details.  Note that I've given both the vswitch and the vnet devices some sensible names, something I always recommend. Unless told otherwise, the LDoms Manager software will automatically assign MAC addresses to all network elements that need one.  It will also make sure that these MAC addresses are unique and reuse MAC addresses to play nice with all those friendly DHCP servers out there.  However, if we want to do this manually, we can also do that.  (One reason might be firewall rules that work on MAC addresses.)  So let's give mars a manually assigned MAC address: root@sun # ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars Within the guest, these virtual network devices have their own device driver.  
In Solaris 10, they'd appear as "vnet0".  Solaris 11 would apply it's usual vanity naming scheme.  We can configure these interfaces just like any normal interface, give it an IP-address and configure sophisticated routing rules, just like on bare metal.  In many cases, using Jumbo Frames helps increase throughput performance.  By default, these interfaces will run with the standard ethernet MTU of 1500 bytes.  To change this,  it is usually sufficient to set the desired MTU for the virtual switch.  This will automatically set the same MTU for all vnet devices attached to that switch.  Let's change the MTU size of our admin-vsw from the example above: root@sun # ldm set-vsw mtu=9000 admin-vsw primary Note that that you can set the MTU to any value between 1500 and 16000.  Of course, whatever you set needs to be supported by the physical network, too. Another very common area of network configuration is VLAN tagging. This can be a little confusing - my advise here is to be very clear on what you want, and perhaps draw a little diagram the first few times.  As always, keeping a configuration simple will help avoid errors of all kind.  Nevertheless, VLAN tagging is very usefull to consolidate different networks onto one physical cable.  And as such, this concept needs to be carried over into the virtual world.  Enough of the introduction, here's a little diagram to help in explaining how VLANs work in LDoms: Let's remember that any VLANs not explicitly tagged have the default VLAN ID of 1. In this example, we have a vswitch connected to a physical network that carries untagged traffic (VLAN ID 1) as well as VLANs 11, 22, 33 and 44.  There might also be other VLANs on the wire, but the vswitch will ignore all those packets.  We also have two vnet devices, one for mars and one for venus.  Venus will see traffic from VLANs 33 and 44 only.  For VLAN 44, venus will need to configure a tagged interface "vnet44000".  For VLAN 33, the vswitch will untag all incoming traffic for venus, so that venus will see this as "normal" or untagged ethernet traffic.  This is very useful to simplify guest configuration and also allows venus to perform Jumpstart or AI installations over this network even if the Jumpstart or AI server is connected via VLAN 33.  Mars, on the other hand, has full access to untagged traffic from the outside world, and also to VLANs 11,22 and 33, but not 44.  On the command line, we'd do this like this: root@sun # ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary root@sun # ldm add-vnet admin-net pvid=1 vid=11,22,33 admin-vsw mars root@sun # ldm add-vnet admin-net pvid=33 vid=44 admin-vsw venus Finally, I'd like to point to a neat little option that will make your live easier in all those cases where configurations tend to change over the live of a guest system.  It's the "id=<somenumber>" option available for both vswitches and vnet devices.  Normally, Solaris in the guest would enumerate network devices sequentially.  However, it has ways of remembering this initial numbering.  This is good in the physical world.  In the virtual world, whenever you unbind (aka power off and disassemble) a guest system, remove and/or add network devices and bind the system again, chances are this numbering will change.  Configuration confusion will follow suit.  
To avoid this, nail down the initial numbering by assigning each vnet device its device-id explicitly:

root@sun # ldm add-vnet admin-net id=1 admin-vsw venus

Please consult the Admin Guide for details on this, and how to decipher these network ids from Solaris running in the guest. Thanks for reading this far. Links for further reading are essentially only the Admin Guide and Reference Manual and can be found above. I hope this is useful and, as always, I welcome any comments.


  • HDMI video connection cuts top and bottom borders of screen

    - by Luis Alvarado
    OK, this is an extension of another problem I had with a VGA connection and an Nvidia GeForce GT 440 card. Here goes the explanation of this particular problem: I have a Soneview 32" TV. This TV has many connections, including VGA (the first reason I bought it), HDMI (the second reason, but I did not have an HDMI cable at that time) and DVI. I have had this TV for a little over a month now; I actually got it to celebrate the release of Ubuntu 11.10 and started using it exactly on that date (I know, too much of a fan there, but hey, I like geek stuff). I started using it with the VGA cable. After 2 weeks I bought an Nvidia GT 440 card. The previous 9500GT was working correctly with no problems whatsoever. I installed the GT 440 and the first problem I encountered using this new card is mentioned here: Nvidia GT 440 black screen problem when loading lightdm greeter. The solution to that problem was to actually disconnect and then reconnect the VGA cable. This would result in the screen showing me the lightdm login screen. If I did not disconnect and reconnect the cable I could be there forever thinking there was no video signal. I got tired of looking for answers that did not work and for solutions that made me literally have to reinstall Ubuntu. I just went and bought an HDMI cable and swapped the VGA one for it. It worked, and I did not have to disconnect/reconnect the cable, but now I have this problem at any resolution. My normal resolution is 1920x1080 (this TV is 1080 HD), so over VGA I could use this resolution with no problem, but on HDMI I am getting the borders cut off. Here is a pic: As you can see from the picture, the Launcher icons show less than 50% of their width. Forget about the top and bottom parts; I can access them with the mouse but I cannot see them on the screen. It is as if they are outside of the TV's view. Basically, about 20 to 30 pixels are gone from all sides. I searched around and got to running xrandr --verbose to see what it could detect from the TV.
I got this: cyrex@cyrex:~$ xrandr --verbose xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 175, current 1920 x 1080, maximum 1920 x 1080 default connected 1920x1080+0+0 (0x164) normal (normal) 0mm x 0mm Identifier: 0x163 Timestamp: 465485 Subpixel: unknown Clones: CRTC: 0 CRTCs: 0 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: 1920x1080 (0x164) 103.7MHz *current h: width 1920 start 0 end 0 total 1920 skew 0 clock 54.0KHz v: height 1080 start 0 end 0 total 1080 clock 50.0Hz 1920x1080 (0x165) 105.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 55.1KHz v: height 1080 start 0 end 0 total 1080 clock 51.0Hz 1920x1080 (0x166) 107.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 56.2KHz v: height 1080 start 0 end 0 total 1080 clock 52.0Hz 1920x1080 (0x167) 109.9MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 57.2KHz v: height 1080 start 0 end 0 total 1080 clock 53.0Hz 1920x1080 (0x168) 112.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 58.3KHz v: height 1080 start 0 end 0 total 1080 clock 54.0Hz 1920x1080 (0x169) 114.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 59.4KHz v: height 1080 start 0 end 0 total 1080 clock 55.0Hz 1680x1050 (0x16a) 98.8MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 58.8KHz v: height 1050 start 0 end 0 total 1050 clock 56.0Hz 1680x1050 (0x16b) 100.5MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 59.9KHz v: height 1050 start 0 end 0 total 1050 clock 57.0Hz 1600x1024 (0x16c) 95.0MHz h: width 1600 start 0 end 0 total 1600 skew 0 clock 59.4KHz v: height 1024 start 0 end 0 total 1024 clock 58.0Hz 1440x900 (0x16d) 76.5MHz h: width 1440 start 0 end 0 total 1440 skew 0 clock 53.1KHz v: height 900 start 0 end 0 total 900 clock 59.0Hz 1360x768 (0x171) 65.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 48.4KHz v: height 768 start 0 end 0 total 768 clock 63.0Hz 1360x768 (0x172) 66.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 49.2KHz v: height 768 start 0 end 0 total 768 clock 64.0Hz 1280x1024 (0x173) 85.2MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.6KHz v: height 1024 start 0 end 0 total 1024 clock 65.0Hz 1280x960 (0x176) 83.6MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 65.3KHz v: height 960 start 0 end 0 total 960 clock 68.0Hz 1280x960 (0x177) 84.8MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.2KHz v: height 960 start 0 end 0 total 960 clock 69.0Hz 1280x720 (0x178) 64.5MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 50.4KHz v: height 720 start 0 end 0 total 720 clock 70.0Hz 1280x720 (0x179) 65.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.1KHz v: height 720 start 0 end 0 total 720 clock 71.0Hz 1280x720 (0x17a) 66.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.8KHz v: height 720 start 0 end 0 total 720 clock 72.0Hz 1152x864 (0x17b) 72.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.1KHz v: height 864 start 0 end 0 total 864 clock 73.0Hz 1152x864 (0x17c) 73.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.9KHz v: height 864 start 0 end 0 total 864 clock 74.0Hz ....Many Resolutions later... 
320x200 (0x1d1) 10.2MHz h: width 320 start 0 end 0 total 320 skew 0 clock 31.8KHz v: height 200 start 0 end 0 total 200 clock 159.0Hz
320x175 (0x1d2) 9.0MHz h: width 320 start 0 end 0 total 320 skew 0 clock 28.0KHz v: height 175 start 0 end 0 total 175 clock 160.0Hz
1920x1080 (0x1dd) 333.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 173.9KHz v: height 1080 start 0 end 0 total 1080 clock 161.0Hz

If it helps, the refresh rate at 1920x1080 is 60. There is a flickering effect at this resolution using HDMI but not VGA, which I imagine is related to the borders-being-cut-off issue I am asking about here. I have also done the following, but it only solves the problem at resolutions lower than 1920x1080 or on other TVs (my father has a Sony TV where this problem is also solved):

NVIDIA WAY
Go to Nvidia-Settings and there will be an option that has more features if an HDMI cable is connected. In the next pic the option is DFP-1 (CNDLCD), but this name changes depending on what device the PC is connected to: uncheck Force Full GPU Scaling. What this does for resolutions LOWER than 1920x1080 (at least in my case) is solve the flickering problem and fix the borders cut by the monitor. Save the changes to the Xorg.conf file after changing to a resolution acceptable to your eyes.

TV WAY
If your TV has an OSD menu and this menu has options for scanning the screen resolution or auto-adjusting to it, disable them, specifically the option about SCAN. If you have an option for AV Mode, disable it. Basically, disable any option that needs to scan and scale the resolution. Test them one by one. In the case of my father's TV this did it. In my case, the Nvidia way solved it for lower resolutions. NOTE: If this is not solved in the next couple of weeks I will add this as the answer, but take into consideration that the issue is still active at 1920x1080.


  • EM12c Release 4: New EMCLI Verbs

    - by SubinDaniVarughese
    Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). This helps you in writing new scripts or enhancing your existing scripts for further automation. Basic Administration Verbs invoke_ws - Invoke EM web service.ADM Verbs associate_target_to_adm - Associate a target to an application data model. export_adm - Export Application Data Model to a specified .xml file. import_adm - Import Application Data Model from a specified .xml file. list_adms - List the names, target names and application suites of existing Application Data Models verify_adm - Submit an application data model verify job for the target specified.Agent Update Verbs get_agent_update_status -  Show Agent Update Results get_not_updatable_agents - Shows Not Updatable Agents get_updatable_agents - Show Updatable Agents update_agents - Performs Agent Update Prereqs and submits Agent Update JobBI Publisher Reports Verbs grant_bipublisher_roles - Grants access to the BI Publisher catalog and features. revoke_bipublisher_roles - Revokes access to the BI Publisher catalog and features.Blackout Verbs create_rbk - Create a Retro-active blackout.CFW Verbs cancel_cloud_service_requests -  To cancel cloud service requests delete_cloud_service_instances -  To delete cloud service instances delete_cloud_user_objects - To delete cloud user objects. get_cloud_service_instances - To get information about cloud service instances get_cloud_service_requests - To get information about cloud requests get_cloud_user_objects - To get information about cloud user objects.Chargeback Verbs add_chargeback_entity - Adds the given entity to Chargeback. assign_charge_plan - Assign a plan to a chargeback entity. assign_cost_center - Assign a cost center to a chargeback entity. create_charge_entity_type - Create  charge entity type export_charge_plans - Exports charge plans metadata to file export_custom_charge_items -  Exports user defined charge items to a file import_charge_plans - Imports charge plans metadata from given file import_custom_charge_items -  Imports user defined charge items metadata from given file list_charge_plans - Gives a list of charge plans in Chargeback. list_chargeback_entities - Gives a list of all the entities in Chargeback list_chargeback_entity_types - Gives a list of all the entity types that are supported in Chargeback list_cost_centers - Lists the cost centers in Chargeback. remove_chargeback_entity - Removes the given entity from Chargeback. unassign_charge_plan - Un-assign the plan associated to a chargeback entity. unassign_cost_center - Un-assign the cost center associated to a chargeback entity.Configuration/Association History disable_config_history - Disable configuration history computation for a target type. enable_config_history - Enable configuration history computation for a target type. set_config_history_retention_period - Sets the amount of time for which Configuration History is retained.ConfigurationCompare config_compare - Submits the configuration comparison job get_config_templates - Gets all the comparison templates from the repositoryCompliance Verbs fix_compliance_state -  Fix compliance state by removing references in deleted targets.Credential Verbs update_credential_setData Subset Verbs export_subset_definition - Exports specified subset definition as XML file at specified directory path. generate_subset - Generate subset using specified subset definition and target database. import_subset_definition - Import a subset definition from specified XML file. 
import_subset_dump - Imports dump file into specified target database. list_subset_definitions - Get the list of subset definition, adm and target nameDelete pluggable Database Job Verbs delete_pluggable_database - Delete a pluggable databaseDeployment Procedure Verbs get_runtime_data - Get the runtime data of an executionDiscover and Push to Agents Verbs generate_discovery_input - Generate Discovery Input file for discovering Auto-Discovered Domains refresh_fa - Refresh Fusion Instance run_fa_diagnostics - Run Fusion Applications DiagnosticsFusion Middleware Provisioning Verbs create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation MediaGold Agent Image Verbs create_gold_agent_image - Creates a gold agent image. decouple_gold_agent_image - Decouples the agent from gold agent image. delete_gold_agent_image - Deletes a gold agent image. get_gold_agent_image_activity_status -  Gets gold agent image activity status. get_gold_agent_image_details - Get the gold agent image details. list_agents_on_gold_image - Lists agents on a gold agent image. list_gold_agent_image_activities - Lists gold agent image activities. list_gold_agent_image_series - Lists gold agent image series. list_gold_agent_images - Lists the available gold agent images. promote_gold_agent_image - Promotes a gold agent image. stage_gold_agent_image - Stages a gold agent image.Incident Rules Verbs add_target_to_rule_set - Add a target to an enterprise rule set. delete_incident_record - Delete one or more open incidents remove_target_from_rule_set - Remove a target from an enterprise rule set. Job Verbs export_jobs - Export job details in to an xml file import_jobs - Import job definitions from an xml file job_input_file - Supply details for a job verb in a property file resume_job - Resume a job or set of jobs suspend_job - Suspend a job or set of jobs Oracle Database as Service Verbs config_db_service_target - Configure DB Service target for OPCPrivilege Delegation Settings Verbs clear_default_privilege_delegation_setting - Clears the default privilege delegation setting for a given list of platforms set_default_privilege_delegation_setting - Sets the default privilege delegation setting for a given list of platforms test_privilege_delegation_setting - Tests a Privilege Delegation Setting on a hostSSA Verbs cleanup_dbaas_requests - Submit cleanup request for failed request create_dbaas_quota - Create Database Quota for a SSA User Role create_service_template - Create a Service Template delete_dbaas_quota - Delete the Database Quota setup for a SSA User Role delete_service_template - Delete a given service template get_dbaas_quota - List the Database Quota setup for all SSA User Roles get_dbaas_request_settings - List the Database Request Settings get_service_template_detail - Get details of a given service template get_service_templates -  Get the list of available service templates rename_service_template -  Rename a given service template update_dbaas_quota - Update the Database Quota for a SSA User Role update_dbaas_request_settings - Update the Database Request Settings update_service_template -  Update a given service template. 
SavedConfigurations get_saved_configs  - Gets the saved configurations from the repository Server Generated Alert Metric Verbs validate_server_generated_alerts  - Server Generated Alert Metric VerbServices Verbs edit_sl_rule - Edit the service level rule for the specified serviceSiebel Verbs list_siebel_enterprises -  List Siebel enterprises currently monitored in EM list_siebel_servers -  List Siebel servers under a specified siebel enterprise update_siebel- Update a Siebel enterprise or its underlying serversSiteGuard Verbs add_siteguard_aux_hosts -  Associate new auxiliary hosts to the system configure_siteguard_lag -  Configure apply lag and transport lag limit for databases delete_siteguard_aux_host -  Delete auxiliary host associated with a site delete_siteguard_lag -  Erases apply lag or transport lag limit for databases get_siteguard_aux_hosts -  Get all auxiliary hosts associated with a site get_siteguard_health_checks -  Shows schedule of health checks get_siteguard_lag -  Shows apply lag or transport lag limit for databases schedule_siteguard_health_checks -  Schedule health checks for an operation plan stop_siteguard_health_checks -  Stops all future health check execution of an operation plan update_siteguard_lag -  Updates apply lag and transport lag limit for databasesSoftware Library Verbs stage_swlib_entity_files -  Stage files of an entity from Software Library to a host target.Target Data Verbs create_assoc - Creates target associations delete_assoc - Deletes target associations list_allowed_pairs - Lists allowed association types for specified source and destination list_assoc - Lists associations between source and destination targets manage_agent_partnership - Manages partnership between agents. Used for explicitly assigning agent partnershipsTrace Reports generate_ui_trace_report  -  Generate and download UI Page performance report (to identify slow rendering pages)VI EMCLI Verbs add_virtual_platform - Add Oracle Virtual PLatform(s). modify_virtual_platform - Modify Oracle Virtual Platform.To get more details about each verb, execute$ emcli help <verb_name>Example: $ emcli help list_assocNew resources in list verbThese are the new resources in EM CLI list verb :Certificates  WLSCertificateDetails Credential Resource Group  PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)   PreferredCredentialsSystemScope - Target preferred credentialPrivilege Delegation Settings  TargetPrivilegeDelegationSettingDetails  - List privilege delegation setting details on a host  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host   PrivilegeDelegationSettings  - Lists all Privilege Delegation Settings   PrivilegeDelegationSettingDetails - Lists details of  Privilege Delegation Settings To get more details about each resource, execute$ emcli list -resource="<resource_name>" -helpExample: $ emcli list -resource="PrivilegeDelegationSettings" -helpDeprecated Verbs:Agent Administration Verbs resecure_agent - Resecure an agentTo get the complete list of verbs, execute:$ emcli help Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Customize Team Build 2010 – Part 16: Specify the relative reference path

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application Part 16: Specify the relative reference path As I have already blogged about, it is not intuitive how to specify the paths where the build server has to look for references that are stored in Source Control. It is a common practice to store 3rd party libraries in Source Control, so they are available to everyone, everyone uses the same version of the libraries and updating a library can be done centrally. In Team Build 2010 these paths are specified as a parameter for MSBuild. What we will do in this post is building the values for this parameter based on the values in an argument. You are now pretty aware how to customize the build template, so let’s do the modifications in another way. Instead of opening the xaml file in the workflow designer, we open it in the XML editor. You can open it in the XML Editor by either selecting the Open with menu (see the context menu), or by choosing the View code option. To add this functionality we need to: Specify a new argument Add the argument to the metadata Build the absolute paths for the references and add these paths to the MSBuild arguments 1. Specify a new argument Locate at the top of the document the Members (which are the arguments) of the XAML and add the following line <x:Property Name="ReferencePaths" Type="InArgument(s:String[])" /> 2. Add the argument to the metadata Then locate the line <mtbw:ProcessParameterMetadataCollection> and paste the following line <mtbw:ProcessParameterMetadata Category="Misc" Description="The list of reference paths, relative to the root path in the Workspace mapping." DisplayName="Reference paths" ParameterName="ReferencePaths" /> 3. Build the absolute paths for the references and add these paths to the MSBuild arguments Now locate the place where the assignments are done to the variables used in the agent. 
And add the following lines after the last Assign activity         <Sequence DisplayName="Initialize ReferencePath" sap:VirtualizedContainerService.HintSize="464,428">           <Sequence.Variables>             <Variable x:TypeArguments="x:String" Name="ReferencePathsArgument">               <Variable.Default>                 <Literal x:TypeArguments="x:String" Value="" />               </Variable.Default>             </Variable>           </Sequence.Variables>           <sap:WorkflowViewStateService.ViewState>             <scg:Dictionary x:TypeArguments="x:String, x:Object">               <x:Boolean x:Key="IsExpanded">True</x:Boolean>             </scg:Dictionary>           </sap:WorkflowViewStateService.ViewState>           <ForEach x:TypeArguments="x:String" DisplayName="Iterate through the paths" sap:VirtualizedContainerService.HintSize="287,206" mtbwt:BuildTrackingParticipant.Importance="Low" Values="[ReferencePaths]">             <ActivityAction x:TypeArguments="x:String">               <ActivityAction.Argument>                 <DelegateInArgument x:TypeArguments="x:String" Name="path" />               </ActivityAction.Argument>               <Assign x:TypeArguments="x:String" DisplayName="Build ReferencePath argument" sap:VirtualizedContainerService.HintSize="257,100" mtbwt:BuildTrackingParticipant.Importance="Low"  To="[ReferencePathsArgument]" Value="[If(String.IsNullOrEmpty(ReferencePathsArgument), &quot;&quot;, ReferencePathsArgument + &quot;;&quot;) + IO.Path.Combine(SourcesDirectory, path)]" />             </ActivityAction>           </ForEach>           <Assign DisplayName="Append the reference paths to the MSBuild Arguments" sap:VirtualizedContainerService.HintSize="287,58">             <Assign.To>               <OutArgument x:TypeArguments="x:String">[MSBuildArguments]</OutArgument>             </Assign.To>             <Assign.Value>               <InArgument x:TypeArguments="x:String">[String.Format("{0} /p:ReferencePath=""{1}""", MSBuildArguments, ReferencePathsArgument)]</InArgument>             </Assign.Value>           </Assign>         </Sequence> Now you can use the template to specify the paths relative to SourcesDirectory. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Anti-Forgery Request Helpers for ASP.NET MVC and jQuery AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to a cookie and inside the form; when the form is submitted, the token in the cookie and the token inside the form are both sent in the HTTP request; the server then validates the two tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token and writes it inside the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes it into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, both tokens are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that should validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems were encountered. Specify validation on controller (not on each action) The server-side problem is that you would expect to declare [ValidateAntiForgeryToken] once on the controller, but it actually has to be declared on each POST action. Because POST actions usually far outnumber controllers, this is a little crazy. Problem Usually a controller contains actions for HTTP GET requests and actions for HTTP POST requests, and usually validation is only expected for the HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If the browser sends an HTTP GET request by clicking a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions.
Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenAttribute wrapper class can be helpful, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now one single attribute on controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs can be specified on the built-in [ValidateAntiForgeryToken] attribute, which is easy to implemented. Submit token via AJAX The browser side problem is, if server side turns on anti-forgery validation for POST, then AJAX POST requests will fail be default. Problem For AJAX scenarios, when request is sent by jQuery instead of form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST requests will always be invalid, because server side code cannot see the token in the posted data. Solution The tokens are printed to browser then sent back to server. So first of all, HtmlHelper.AntiForgeryToken() must be called somewhere. Now the browser has token in HTML and cookie. Then jQuery must find the printed token in the HTML, and append token to the data before sending:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated into a tiny jQuery plugin:/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> from the specified. 
// var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type). $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most of the scenarios, it is Ok to just replace $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be some scenarios of custom token. Here $.appendAntiForgeryToken() is provided:data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios that the token is not in the current window. For example, an HTTP POST request can be sent by iframe, while the token is in the parent window. Here window can be specified for $.getAntiForgeryToken():data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have better solution, please do tell me.
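For completeness, here is a minimal sketch of a controller that the $.postAntiForgery() call above could target, assuming the ValidateAntiForgeryTokenWrapper attribute and Constants.AntiForgeryTokenSalt defined earlier. The controller and action names (ProductController, Create) and the parameters are hypothetical, chosen only to show the shape of the receiving end, not taken from the original article.

// Hypothetical controller for illustration only.
// The wrapper attribute validates the __RequestVerificationToken that the
// jQuery plugin appends to the posted data.
[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
public class ProductController : Controller
{
    [HttpPost]
    public ActionResult Create(string productName, int categoryId)
    {
        // If execution reaches this point, the anti-forgery tokens matched.
        // ... persist the product here ...
        return Json(new { success = true });
    }
}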

    Read the article

  • Using Radio Button in GridView with Validation

    - by Vincent Maverick Durano
    A developer asked how to select one radio button at a time when the radio buttons are inside a GridView. As you may know, setting the GroupName attribute of a radio button will not work when the radio button is located within a data representation control like the GridView. This is because a radio button inside the GridView behaves differently: since a GridView is rendered as a table element, at run time a different "name" is assigned to each radio button, and hence you are able to select multiple rows. In this post I'm going to demonstrate how to select one radio button at a time in a GridView and add a simple validation to it. To get started, fire up Visual Studio and create a new web application / website project. Add a WebForm and then add a GridView. The markup would look something like this: <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false" > <Columns> <asp:TemplateField> <ItemTemplate> <asp:RadioButton ID="rb" runat="server" /> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="RowNumber" HeaderText="Row Number" /> <asp:BoundField DataField="Col1" HeaderText="First Column" /> <asp:BoundField DataField="Col2" HeaderText="Second Column" /> </Columns> </asp:GridView> Notice that I've added a TemplateField column so that we can add the radio button there. I have also set up some BoundField columns and set the DataFields to RowNumber, Col1 and Col2. These are just dummy columns and I used them to keep this example simple. Now, where do these columns come from? They are created by hand in the code-behind file of the ASPX. Here's the code: private DataTable FillData() { DataTable dt = new DataTable(); DataRow dr = null; //Create DataTable columns dt.Columns.Add(new DataColumn("RowNumber", typeof(string))); dt.Columns.Add(new DataColumn("Col1", typeof(string))); dt.Columns.Add(new DataColumn("Col2", typeof(string))); //Create Row for each columns dr = dt.NewRow(); dr["RowNumber"] = 1; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 2; dr["Col1"] = "AA"; dr["Col2"] = "BB"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 3; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 4; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 5; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); return dt; } And here's the code for binding the GridView to the dummy data above: protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { GridView1.DataSource = FillData(); GridView1.DataBind(); } } Okay, we now have a GridView with a radio button on each row. Now let's switch back to the ASPX markup. In this example I'm going to use JavaScript to validate the radio buttons so that only one can be selected at a time.
Here's the javascript code below: function CheckOtherIsCheckedByGVID(rb) { var isChecked = rb.checked; var row = rb.parentNode.parentNode; if (isChecked) { row.style.backgroundColor = '#B6C4DE'; row.style.color = 'black'; } var currentRdbID = rb.id; parent = document.getElementById("<%= GridView1.ClientID %>"); var items = parent.getElementsByTagName('input'); for (i = 0; i < items.length; i++) { if (items[i].id != currentRdbID && items[i].type == "radio") { if (items[i].checked) { items[i].checked = false; items[i].parentNode.parentNode.style.backgroundColor = 'white'; items[i].parentNode.parentNode.style.color = '#696969'; } } } } The function above sets the row of the current selected radio button's style to determine that the row is selected and then loops through the radio buttons in the gridview and then de-select the previous selected radio button and set the row style back to its default. You can then call the javascript function above at onlick event of radio button like below: <asp:RadioButton ID="rb" runat="server" onclick="javascript:CheckOtherIsCheckedByGVID(this);" /> Here's the output below: On Load: After Selecting a Radio Button: As you have noticed, on initial load there's no default selected radio in the GridView. Now let's add a simple validation for that. We will basically display an error message if a user clicks a button that triggers a postback without selecting  a radio button in the GridView. Here's the javascript for the validation: function ValidateRadioButton(sender, args) { var gv = document.getElementById("<%= GridView1.ClientID %>"); var items = gv.getElementsByTagName('input'); for (var i = 0; i < items.length ; i++) { if (items[i].type == "radio") { if (items[i].checked) { args.IsValid = true; return; } else { args.IsValid = false; } } } } The function above loops through the rows in gridview and find all the radio buttons within it. It will then check each radio button checked property. If a radio is checked then set IsValid to true else set it to false.  The reason why I'm using IsValid is because I'm using the ASP validator control for validation. Now add the following mark up below under the GridView declaration: <br /> <asp:Label ID="lblMessage" runat="server" /> <br /> <asp:Button ID="btn" runat="server" Text="POST" onclick="btn_Click" ValidationGroup="GroupA" /> <asp:CustomValidator ID="CustomValidator1" runat="server" ErrorMessage="Please select row in the grid." ClientValidationFunction="ValidateRadioButton" ValidationGroup="GroupA" style="display:none"></asp:CustomValidator> <asp:ValidationSummary ID="ValidationSummary1" runat="server" ValidationGroup="GroupA" HeaderText="Error List:" DisplayMode="BulletList" ForeColor="Red" /> And then at Button Click event add this simple code below just to test if  the validation works: protected void btn_Click(object sender, EventArgs e) { lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt"); } Here's the output below that you can see in the browser:   That's it! I hope someone find this post useful! Technorati Tags: ASP.NET,JavaScript,GridView
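One thing the sample above does not show is how to read the selection back on the server once the postback succeeds. A small code-behind sketch along the following lines could do it; the FindSelectedRowIndex helper is my own name, not part of the original sample, and the btn_Click handler shown earlier could simply call it.

// Hypothetical helper: walks the GridView rows and returns the index of the
// row whose "rb" RadioButton is checked, or -1 when nothing is selected.
private int FindSelectedRowIndex()
{
    foreach (GridViewRow row in GridView1.Rows)
    {
        RadioButton rb = row.FindControl("rb") as RadioButton;
        if (rb != null && rb.Checked)
        {
            return row.RowIndex;
        }
    }
    return -1;
}

Because the CustomValidator already guarantees that a row is selected before the postback is allowed through, the helper should only return -1 if client-side validation was bypassed.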

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity, INSERTs, UPDATEs and DELETEs cause fragmentation and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files. How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage; an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file but in most cases there is little point in creating more than one as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created the file is actually sub divided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let’s get to see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The results of this are firstly a new database is created with specified files sizes and the the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them but lets first note there are 4 rows of information, this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes, you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Lets alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has it’s size increased are pretty basic. 
If the file growth is lower than 64MB then 4 VLFs are created If the growth is between 64MB and 1GB then 8 VLFs are created If the growth is greater than 1GB then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database log file gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB, let’s add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions, I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations so it’s well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps and actively choosing to grow your databases rather than leaving it to the Auto Grow event can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to, taking in to account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while This is already detailed far better than I can explain it by Kimberley Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created By complete coincidence while I have been writing this blog (it’s been quite some time from it’s inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away. 
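If you just want a quick ad-hoc check of the VLF count from an application or a scheduled task, rather than a full monitoring tool, the DBCC LOGINFO output is easy to read programmatically. The following is a minimal C# sketch, with a placeholder connection string that you would point at your own server and database; every row the command returns represents one VLF.

using System;
using System.Data.SqlClient;

class VlfCountCheck
{
    static void Main()
    {
        // Placeholder connection string - point this at the database you want to inspect.
        const string connectionString = @"Data Source=.\sql2008r2;Initial Catalog=VLF_Test;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("DBCC LOGINFO", connection))
        {
            connection.Open();
            int vlfCount = 0;
            using (var reader = command.ExecuteReader())
            {
                // DBCC LOGINFO returns one row per virtual log file.
                while (reader.Read())
                {
                    vlfCount++;
                }
            }
            Console.WriteLine("VLF count: {0}", vlfCount);
        }
    }
}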
Hassle free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that let’s you write your own custom metrics then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site and there are some others there are very useful so take a moment or two to look around while you are there. Resources MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • Simple way of converting server side objects into client side using JSON serialization for asp.net websites

    - by anil.kasalanati
    Introduction: With the growth of Web 2.0 and the need for a faster user experience, the spotlight has shifted onto JavaScript-based applications built using the REST pattern or the ASP.NET AJAX PageRequestManager. And when we are working with JavaScript, wouldn't it be much better if we could create objects in an OOAD way and easily push them to the client side? The following are the reasons why you would push server-side objects onto the client side: - Easy availability of the complex object. - Use the C# compiler and rich IntelliSense to create and maintain the objects, but consume them from JavaScript; you can also run code analysis etc. - Reduce the number of calls made to the server by loading data during page load. I would like to expand on the third point because it proved to be highly beneficial to me when I was fixing the performance issues of a major website. There could be a scenario in which you are making multiple AJAX-based WebRequestManager calls to get the same response within a single page. This happens with a widget-based framework where all the widgets are independent but need some common information, available in the framework, in order to load their data. So instead of making n separate calls we could load the data needed during page load. The above picture shows the scenario in which all the widgets need the common information and then call the GetData web service on the server side. Of course the result can be cached on the client side, but a better solution would be to avoid the call completely. In order to do that we need to JSON-serialize the content and send it in the DOM. Example: I have developed a simple application to demonstrate the idea and I will explain it in detail here. The class called SimpleClass will be sent as serialized JSON to the client side. It inherits from the base class, which has the implementation of the GetJSONString method. You can create a single base class, and all the objects which need to be pushed to the client side can inherit from that class. The important thing to note is that the class should be annotated with the DataContract attribute and its members should have the DataMember attribute. This is needed by the .NET DataContractSerializer, which follows an opt-in model: if you want to send a property to the client side then you need to annotate it with the DataMember attribute. So if I didn't want to send the Result I would simply remove its DataMember attribute. This is standard WCF/.NET 3.5 stuff, but it provides the flexibility of having a full-fledged object on the server side while sending a smaller object to the client side. Sometimes you may want to hide some values due to security constraints. One other thing you will notice is that I have marked the class as Serializable so that it can be stored in the Session and used in web farm deployment scenarios.
Following is the implementation of the base class –  This implements the default DataContractJsonSerializer and for more information or customization refer to following blogs – http://softcero.blogspot.com/2010/03/optimizing-net-json-serializing-and-ii.html http://weblogs.asp.net/gunnarpeipman/archive/2010/12/28/asp-net-serializing-and-deserializing-json-objects.aspx The next part is pretty simple, I just need to inject this object into the aspx page.   And in the aspx markup I have the following line – <script type="text/javascript"> var data =(<%=SimpleClassJSON  %>);   alert(data.ResultText); </script>   This will output the content as JSON into the variable data and this can be any element in the DOM. And you can verify the element by checking data in the Firebug console.    Design Consideration – If you have a lot of javascripts then you need to think about using Script # and you can write javascript in C#. Refer to Nikhil’s blog – http://projects.nikhilk.net/ScriptSharp Ensure that you are taking security into consideration while exposing server side objects on to client side. I have seen application exposing passwords, secret key so it is not a good practice.   The application can be tested using the following url – http://techconsulting.vpscustomer.com/Samples/JsonTest.aspx The source code is available at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip
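Since the base class implementation above is only shown as a screenshot, here is a rough sketch of what a GetJSONString method built on the default DataContractJsonSerializer could look like. Treat it as an approximation under my own naming (JsonSerializableBase is a hypothetical class name), not a copy of the original code:

using System;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

[Serializable]
public abstract class JsonSerializableBase
{
    // Serializes the current instance to a JSON string, honouring the
    // opt-in [DataMember] attributes on the derived class.
    public string GetJSONString()
    {
        var serializer = new DataContractJsonSerializer(this.GetType());
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, this);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}

SimpleClass would then inherit from this base class and simply expose its [DataMember] properties; the page injects the result of GetJSONString() into the markup exactly as shown above.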

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID: During normal execution this will get altered when the package is executed from the Execute Package Task however we want to make sure that at deployment-time they all have a default value of –1. Of course, they tend to get changed during development so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option but given that we have over 40 packages and we might want to carry out this reset fairly frequently I needed a more automated method so I turned to Visual Studio’s Find/Replace… feature Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be change. In the rest of this blog post I’ll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>), if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio: <?xml version="1.0"?> <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">   <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="Some other description"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"       DTS:ObjectName="SomeOtherObjectName">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>     </DTS:PackageParameter>   </DTS:PackageParameters> </DTS:Executable> We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document: {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>} I have highlighted the name of the parameter that we’re looking for. 
I have also highlighted two portions identified by pairs of curly braces “{…}”; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here: <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter> Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply: \1-1\2 We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in “All Open Documents” then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this: That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex expression and if that person is you….that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage @Jamiet
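If you prefer to script the reset instead of driving the Find and Replace dialog by hand, the same idea can be expressed with the .NET Regex class. Bear in mind that .NET regular expressions use (...) capture groups and $1/$2 in the replacement string rather than the {...} tag expressions and \1 syntax of the Visual Studio dialog, so the pattern below is a simplified illustration of the approach rather than a drop-in translation of the expression above, and the folder path is a placeholder:

using System.IO;
using System.Text.RegularExpressions;

class ResetParameterDefaults
{
    static void Main()
    {
        // Simplified pattern: capture the markup either side of the ParameterValue
        // that belongs to the ETLIfcHist_ID parameter, and keep only those captures.
        var pattern = new Regex(
            "(DTS:ObjectName=\"ETLIfcHist_ID\">.*?DTS:Name=\"ParameterValue\">)[^<]*(</DTS:Property>)",
            RegexOptions.Singleline);

        foreach (var file in Directory.GetFiles(@"C:\MySsisProject", "*.dtsx"))
        {
            string contents = File.ReadAllText(file);
            // $1 and $2 put the captured markup back with the literal -1 in between.
            File.WriteAllText(file, pattern.Replace(contents, "$1-1$2"));
        }
    }
}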

    Read the article

  • Networking in VirtualBox

    - by Fat Bloke
    Networking in VirtualBox is extremely powerful, but can also be a bit daunting, so here's a quick overview of the different ways you can setup networking in VirtualBox, with a few pointers as to which configurations should be used and when. VirtualBox allows you to configure up to 8 virtual NICs (Network Interface Controllers) for each guest vm (although only 4 are exposed in the GUI) and for each of these NICs you can configure: Which virtualized NIC-type is exposed to the Guest. Examples include: Intel PRO/1000 MT Server (82545EM),  AMD PCNet FAST III (Am79C973, the default) or  a Paravirtualized network adapter (virtio-net). How the NIC operates with respect to your Host's physical networking. The main modes are: Network Address Translation (NAT) Bridged networking Internal networking Host-only networking NAT with Port-forwarding The choice of NIC-type comes down to whether the guest has drivers for that NIC.  VirtualBox, suggests a NIC based on the guest OS-type that you specify during creation of the vm, and you rarely need to modify this. But the choice of networking mode depends on how you want to use your vm (client or server) and whether you want other machines on your network to see it. So let's look at each mode in a bit more detail... Network Address Translation (NAT) This is the default mode for new vm's and works great in most situations when the Guest is a "client" type of vm. (i.e. most network connections are outbound). Here's how it works: When the guest OS boots,  it typically uses DHCP to get an IP address. VirtualBox will field this DHCP request and tell the guest OS its assigned IP address and the gateway address for routing outbound connections. In this mode, every vm is assigned the same IP address (10.0.2.15) because each vm thinks they are on their own isolated network. And when they send their traffic via the gateway (10.0.2.2) VirtualBox rewrites the packets to make them appear as though they originated from the Host, rather than the Guest (running inside the Host). This means that the Guest will work even as the Host moves from network to network (e.g. laptop moving between locations), and from wireless to wired connections too. However, how does another computer initiate a connection into a Guest?  e.g. connecting to a web server running in the Guest. This is not (normally) possible using NAT mode as there is no route into the Guest OS. So for vm's running servers we need a different networking mode.... Bridged Networking Bridged Networking is used when you want your vm to be a full network citizen, i.e. to be an equal to your host machine on the network. In this mode, a virtual NIC is "bridged" to a physical NIC on your host, like this: The effect of this is that each VM has access to the physical network in the same way as your host. It can access any service on the network such as external DHCP services, name lookup services, and routing information just as the host does. Logically, the network looks like this: The downside of this mode is that if you run many vm's you can quickly run out of IP addresses or your network administrator gets fed up with you asking for statically assigned IP addresses. Secondly, if your host has multiple physical NICs (e.g. Wireless and Wired) you must reconfigure the bridge when your host jumps networks.  Hmm, so what if you want to run servers in vm's but don't want to involve your network administrator? Maybe one of the next 2 modes is for you... 
Internal Networking When you configure one or more vm's to sit on an Internal network, VirtualBox ensures that all traffic on that network stays within the host and is only visible to vm's on that virtual network. Configuration looks like this: The internal network ( in this example "intnet" ) is a totally isolated network and so is very "quiet". This is good for testing when you need a separate, clean network, and you can create sophisticated internal networks with vm's that provide their own services to the internal network. (e.g. Active Directory, DHCP, etc). Note that not even the Host is a member of the internal network, but this mode allows vm's to function even when the Host is not connected to a network (e.g. on a plane). Note that in this mode, VirtualBox provides no "convenience" services such as DHCP, so your machines must be statically configured or one of the vm's needs to provide a DHCP/Name service. Multiple internal networks are possible and you can configure vm's to have multiple NICs to sit across internal and other network modes and thereby provide routes if needed. But all this sounds tricky. What if you want an Internal Network that the host participates on with VirtualBox providing IP addresses to the Guests? Ah, then for this, you might want to consider Host-only Networking... Host-only Networking Host-only Networking is like Internal Networking in that you indicate which network the Guest sits on, in this case, "vboxnet0": All vm's sitting on this "vboxnet0" network will see each other, and additionally, the host can see these vm's too. However, other external machines cannot see Guests on this network, hence the name "Host-only". Logically, the network looks like this: This looks very similar to Internal Networking but the host is now on "vboxnet0" and can provide DHCP services. To configure how a Host-only network behaves, look in the VirtualBox Manager...Preferences...Network dialog: Port-Forwarding with NAT Networking Now you may think that we've provided enough modes here to handle every eventuality but here's just one more... What if you cart around a mobile-demo or dev environment on, say, a laptop and you have one or more vm's that you need other machines to connect into? And you are continually hopping onto different (customer?) networks. In this scenario: NAT - won't work because external machines need to connect in. Bridged - possibly an option, but does your customer want you eating IP addresses and can your software cope with changing networks? Internal - we need the vm(s) to be visible on the network, so this is no good. Host-only - same problem as above, we want external machines to connect in to the vm's. Enter Port-forwarding to save the day! Configure your vm's to use NAT networking; Add Port Forwarding rules; External machines connect to "host":"port number" and connections are forwarded by VirtualBox to the guest:port number specified. For example, if your vm runs a web server on port 80, you could set up rules like this:  ...which reads: "any connections on port 8080 on the Host will be forwarded onto this vm's port 80".  This provides a mobile demo system which won't need re-configuring every time you open your laptop lid. Summary VirtualBox has a very powerful set of options allowing you to set up almost any configuration your heart desires. For more information, check out the VirtualBox User Manual on Virtual Networking. -FB 

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed. Downloading and Installing TeamCity TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it a great no-risk tool you can use to experiment with. 1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly. 2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease. 3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, if for example you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect. 4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup. Putting your database in source control using SQL Source Control 5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control. 6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local subversion repository for you behind the scenes. 7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit. 8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right click on the text to the right of 'Linked to', in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes. Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes. 9. In TeamCity Adminstration, select Create Project and give it a name, such as "My first database CI", and click Create. 10. Click on Create Build Configuration, and name it something like "Integration build". 11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor. 12. In my case since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion. 13. In the URL field paste your repository location. 
In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment 14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save. 15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the same of keeping it simple I will pick the simplest option. 16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe 17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing. 18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2 /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager otherwise the SQL Compare command line will not be able to connect. 19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save. 20. Now return to SQL Server Management Studio and make a schema change (eg add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered. 21. Click on Projects in TeamCity to get back to the overview screen: The build log will show you the console output, which is useful for troubleshooting any issues: That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below. Technorati Tags: SQL Server
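As a side note, the same SQL Compare command that TeamCity runs can be wrapped in a small console utility (or called from a test) if you ever want to kick off the synchronisation outside of TeamCity. This is only a sketch and assumes the same install path, server and database names used in the steps above:

using System;
using System.Diagnostics;

class RunSqlCompare
{
    static void Main()
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe",
            Arguments = @"/scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose",
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var process = Process.Start(startInfo))
        {
            // Echo the SQL Compare output so that a failed sync is easy to diagnose.
            Console.WriteLine(process.StandardOutput.ReadToEnd());
            process.WaitForExit();
            // A non-zero exit code generally indicates a problem; check the SQL Compare
            // command-line documentation for the exact codes it returns.
            Environment.Exit(process.ExitCode);
        }
    }
}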

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release.  Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.  The VS 2010 debugger has a ton of great new capabilities.  Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debuging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release.  I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well.  With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful. Breakpoint Labels VS 2010 includes new support for better managing debugger breakpoints.  One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution.  With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging.  Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit. Grouping Multiple Breakpoints Together using a Label Below is a screen-shot of the breakpoints window within Visual Studio 2010.  This lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base): The first and last breakpoint in the list above breaks into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage): Below is the dialog that appears when I select the “Edit labels” command.  We can use it to create a new string label for our breakpoints or select an existing one we have already defined.  In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover: When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label: Filtering/Sorting Breakpoints by Label We can use the “Search” combobox to quickly filter/sort breakpoints by label.  Below we are only showing those breakpoints with the “Lifetime Management” label: Toggling Breakpoints On/Off by Label We can also toggle sets of breakpoints on/off by label group.  We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click: Importing/Exporting Breakpoints VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later.  To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window: Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down).  
I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily setup the debugger in the correct state to investigate it on a separate machine.  Pinned DataTips Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger.  Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value)  – and then click the new “pin” button on it to make the DataTip always visible: You can “pin” any number of DataTips you want onto the screen.  In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well.  Below I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name”.  Note that these last two variable are sub-properties of the “Request” object.   Associating Comments with Pinned DataTips Hovering over a pinned DataTip exposes some additional UI within the debugger: Clicking the comment button at the bottom of this UI expands the DataTip - and allows you to optionally add a comment with it: This makes it really easy to attach and track debugging notes: Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions Pinned DataTips can be used across multiple debugger sessions.  This means that if you stop the debugger, make a code change, and then recompile and start a new debug session - any pinned DataTips will still be there, along with any comments you associate with them.  Pinned DataTips can also be used across multiple Visual Studio sessions.  This means that if you close your project, shutdown Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them. See the Value from Last Debug Session (Great Code Editor Feature) How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running.  DataTips are by default hidden when you are in the code editor and the debugger isn’t running.  On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously setup: Hovering your mouse over a pinned DataTip will cause it to display on the screen.  Below you can see what happens when I hover over the first pin in the editor - it displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them: This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects. Importing/Exporting Pinned DataTips As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions. 
    Summary Visual Studio 2010 includes a bunch of great new debugger features – both big and small.  Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions). I’ll be covering some of the “big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
    Razor is an alternate view engine for asp.net MVC.  It was introduced in the “WebMatrix” tool and has now been released as part of the asp.net MVC 3 preview 1.  Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML.  Additionally, it provides some really nice features for master page type scenarios and you don’t lose access to any of the features you are currently familiar with, such as HTML helper methods. First, download and install the ASP.NET MVC Preview 1.  You can find this at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en. Now, follow these steps to create your first asp.net mvc project using Razor: 1. Open Visual Studio 2010 2. Create a new project.  Select File->New->Project (Shift Control N) 3. You will see the list of project types which should look similar to what’s shown: 4. Select “ASP.NET MVC 3 Web Application (Razor).”  Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard asp.net view engine and template, which isn’t what you want. 5. For this tutorial, and ONLY for this tutorial, select “No, do not create a unit test project.”  In general, you should create and use a unit test project.  Code without unit tests is kind of like diet ice cream.  It just isn’t very good. Now, once we have this done, our brand new project will be created.  In all likelihood, Visual Studio will leave you looking at the “HomeController.cs” class, as shown below: Immediately, you should notice one difference.  The Index action used to look like: public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.Net MVC!"; return View(); } While this will still compile and run just fine, ASP.Net MVC 3 has a much nicer way of doing this: public ActionResult Index() { ViewModel.Message = "Welcome to ASP.Net MVC!"; return View(); } Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic data typing of .NET 4.0 to allow us to express ourselves much more cleanly.  This isn’t a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor. What comes in the box? When we create a project using the ASP.Net MVC 3 Template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0 but with some differences.  Instead of seeing “.aspx” view files and “.ascx” files, we see files with the “.cshtml” extension, which is the default Razor extension.  Before we discuss the details of a Razor file, one thing to keep in mind is that since this is an extremely early preview, intellisense is not currently enabled with the Razor view engine.  This is promised as an update before the final release.  Just like with the aspx view engine, the convention of the folder name for a set of views matching the controller name without the word “Controller” still stands.  Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory.  Remember, in asp.net MVC, convention over configuration is key to successful development! The initial template organizes views in the following folders, located in the project under Views: - Account – The default account management views used by the Account controller.  Each file represents a distinct view. - Home – Views corresponding to the appropriate actions within the home controller.
- Shared – This contains common view objects used by multiple views.  Within here, master pages are stored, as well as partial page views (user controls).  By convention, these partial views are named “_XXXPartial.cshtml” where XXX is the appropriate name, such as _LogonPartial.cshtml.  Additionally, display templates are stored under here. With this in mind, let us take a look at the index.cshtml file under the home view directory.  When you open up index.cshtml you should see 1:   @inherits System.Web.Mvc.WebViewPage 2:  @{ 3:          View.Title = "Home Page"; 4:       LayoutPage = "~/Views/Shared/_Layout.cshtml"; 5:   } 6:  <h2>@View.Message</h2> 7:  <p> 8:     To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC     9:    Website">http://asp.net/mvc</a>. 10:  </p> So looking through this, we observe the following facts: Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage.  Note that this is different than System.Web.MVC.ViewPage which is used by asp.net MVC 2.0 Also note that instead of the <% %> syntax, we use the very simple ‘@’ sign.  The View Engine contains enough context sensitive logic that it can even distinguish between @ in code and @ in an email.  It’s a very clean markup.  Line 2 introduces the idea of a code block in razor.  A code block is a scoping mechanism just like it is in a normal C# class.  It is designated by @{… }  and any C# code can be placed in between.  Note that this is all server side code just like it is when using the aspx engine and <% %>.  Line 3 allows us to set the page title in the client page’s file.  This is a new feature which I’ll talk more about when we get to master pages, but it is another of the nice things razor brings to asp.net mvc development. Line 4 is where we specify our “master” page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located.  A Layout Page is similar to a master page, but it gains a bit when it comes to flexibility.  Again, we’ll come back to this in a later installment.  Line 6 and beyond is where we display the contents of our view.  No more using <%: %> intermixed with code.  Instead, we get to use very clean syntax such as @View.Message.  This is a lot easier to read than <%:@View.Message%> especially when intermixed with html.  For example: <p> My name is @View.Name and I live at @View.Address </p> Compare this to the equivalent using the aspx view engine <p> My name is <%:View.Name %> and I live at <%: View.Address %> </p> While not an earth shaking simplification, it is easier on the eyes.  As  we explore other features, this clean markup will become more and more valuable.
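To make the ViewData-versus-ViewModel comparison above concrete, here is a minimal controller sketch (assuming the MVC 3 Preview 1 bits described in this article, where the dynamic property is still named ViewModel and is read from a Razor view as @View.Message):

using System.Web.Mvc;

namespace RazorTest.Controllers
{
    public class HomeController : Controller
    {
        // MVC 2 style: string-keyed dictionary, no compile-time help on the key.
        public ActionResult Classic()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index");
        }

        // MVC 3 Preview 1 style: dynamic ViewModel property, rendered in Index.cshtml as @View.Message.
        public ActionResult Index()
        {
            ViewModel.Message = "Welcome to ASP.NET MVC!";
            return View();
        }
    }
}

Both actions render the same Index.cshtml view; the dynamic version simply trades the magic string key for property-style syntax that reads more naturally alongside the @-based Razor markup.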

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. We will have a call to discuss this blog - please join us!Date: Thursday, November 1, 2012Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D2. If requested, enter your name and email address.3. If a password is required, enter the meeting password: oracle1234. Click "Join". To join the teleconference:Call-in toll-free number:       1-866-682-4770  (US/Canada)      Other countries:                https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ONConference Code:       7629343#Security code:            7777# Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data View Solaris services status details Drill down into a process details View the busiest zones if applicable Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones: This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. Next, if you view a Solaris machine, you will have a "Services" tab: By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file). 
The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. 
Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • How to configure VPN in Windows XP

    - by SAMIR BHOGAYTA
    VPN Overview A VPN is a private network created over a public one. It’s done with encryption, this way, your data is encapsulated and secure in transit – this creates the ‘virtual’ tunnel. A VPN is a method of connecting to a private network by a public network like the Internet. An internet connection in a company is common. An Internet connection in a Home is common too. With both of these, you could create an encrypted tunnel between them and pass traffic, safely - securely. If you want to create a VPN connection you will have to use encryption to make sure that others cannot intercept the data in transit while traversing the Internet. Windows XP provides a certain level of security by using Point-to-Point Tunneling Protocol (PPTP) or Layer Two Tunneling Protocol (L2TP). They are both considered tunneling protocols – simply because they create that virtual tunnel just discussed, by applying encryption. Configure a VPN with XP If you want to configure a VPN connection from a Windows XP client computer you only need what comes with the Operating System itself, it's all built right in. To set up a connection to a VPN, do the following: 1. On the computer that is running Windows XP, confirm that the connection to the Internet is correctly configured. • You can try to browse the internet • Ping a known host on the Internet, like yahoo.com, something that isn’t blocking ICMP 2. Click Start, and then click Control Panel. 3. In Control Panel, double click Network Connections 4. Click Create a new connection in the Network Tasks task pad 5. In the Network Connection Wizard, click Next. 6. Click Connect to the network at my workplace, and then click Next. 7. Click Virtual Private Network connection, and then click Next. 8. If you are prompted, you need to select whether you will use a dialup connection or if you have a dedicated connection to the Internet either via Cable, DSL, T1, Satellite, etc. Click Next. 9. Type a host name, IP or any other description you would like to appear in the Network Connections area. You can change this later if you want. Click Next. 10. Type the host name or the Internet Protocol (IP) address of the computer that you want to connect to, and then click Next. 11. You may be asked if you want to use a Smart Card or not. 12. You are just about done, the rest of the screens just verify your connection, click Next. 13. Click to select the Add a shortcut to this connection to my desktop check box if you want one, if not, then leave it unchecked and click finish. 14. You are now done making your connection, but by default, it may try to connect. You can either try the connection now if you know its valid, if not, then just close it down for now. 15. In the Network Connections window, right-click the new connection and select properties. Let’s take a look at how you can customize this connection before it’s used. 16. The first tab you will see if the General Tab. This only covers the name of the connection, which you can also rename from the Network Connection dialog box by right clicking the connection and selecting to rename it. You can also configure a First connect, which means that Windows can connect the public network (like the Internet) before starting to attempt the ‘VPN’ connection. This is a perfect example as to when you would have configured the dialup connection; this would have been the first thing that you would have to do. It's simple, you have to be connected to the Internet first before you can encrypt and send data over it. 
This setting makes sure that this is a reality for you. 17. The next tab is the Options Tab. The Options tab has a lot you can configure in it. For one, you have the option to connect to a Windows Domain. If you select this check box (unchecked by default), then your VPN client will request Windows logon domain information while starting to work up the VPN connection. Also, you have options here for redialing. Redial attempts are configured here if you are using a dial up connection to get to the Internet. It is very handy to redial if the line is dropped, as dropped lines are very common. 18. The next tab is the Security Tab. This is where you would configure basic security for the VPN client. This is where you would set any advanced IPSec configurations and other security protocols, as well as require encryption and credentials. 19. The next tab is the Networking Tab. This is where you can select what networking items are used by this VPN connection. 20. The last tab is the Advanced Tab. This is where you can configure options for a firewall and/or sharing. Connecting to Corporate Now that you have your XP VPN client all set up and ready, the next step is to attempt a connection to the Remote Access or VPN server set up at the corporate office. To use the connection follow these simple steps. To open the client again, go back to the Network Connections dialog box. 1. Once you are in the Network Connections dialog box, double-click, or right click and select ‘Connect’ from the menu – this will initiate the connection to the corporate office. 2. Type your user name and password, and then click Connect. Properties brings you back to what we just discussed in this article, all the global settings for the VPN client you are using. 3. To disconnect from a VPN connection, right-click the icon for the connection, and then click “Disconnect”. Summary In this article we covered the basics of building a VPN connection using Windows XP. This is very handy when you have a VPN device but don’t have the ‘client’ that may come with it. If the VPN Server doesn’t use highly proprietary protocols, then you can use the XP client to connect with. In a future article I will get into the nuts and bolts of both IPSec and more detail on how to configure the advanced options in the Security tab of this client. Common error codes you may see: 678: The remote computer did not respond. 930: The authentication server did not respond to authentication requests in a timely fashion. 800: Unable to establish the VPN connection. 623: The system could not find the phone book entry for this connection. 720: A connection to the remote computer could not be established. More on : http://www.windowsecurity.com/articles/Configure-VPN-Connection-Windows-XP.html
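As a closing side note (this is my own illustration, not part of the original article): once the connection entry from step 9 exists, you do not have to go through the Network Connections dialog every time. The sketch below shells out to the built-in rasdial.exe utility to dial and hang up that same entry programmatically; the connection name, user name and password are placeholders for whatever you configured above.

using System;
using System.Diagnostics;

class VpnDialer
{
    static void Main()
    {
        // Dial the phonebook entry created in the wizard (the name you typed in step 9).
        // rasdial.exe ships with Windows, so no extra software is required.
        RunRasdial("\"My Office VPN\" myUserName myPassword");

        // ... use the tunnel here ...

        // Hang up the same entry when you are done.
        RunRasdial("\"My Office VPN\" /disconnect");
    }

    static void RunRasdial(string arguments)
    {
        var startInfo = new ProcessStartInfo("rasdial", arguments)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (var process = Process.Start(startInfo))
        {
            // rasdial reports success or one of the error codes listed above on standard output.
            Console.WriteLine(process.StandardOutput.ReadToEnd());
            process.WaitForExit();
        }
    }
}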

    Read the article

  • Enabling Service Availability in WCF Services

    - by cibrax
    It is very important for the enterprise to know which services are operational at any given point. There are many factors that can affect the availability of the services, some of them are external like a database not responding or any dependant service not working. However, in some cases, you only want to know whether a service is up or down, so a simple heart-beat mechanism with “Ping” messages would do the trick. Unfortunately, WCF does not provide a built-in mechanism to support this functionality, and you probably don’t to implement a “Ping” operation in any service that you have out there. For solving this in a generic way, there is a WCF extensibility point that comes to help us, the “Operation Invokers”. In a nutshell, an operation invoker is the class responsible invoking the service method with a set of parameters and generate the output parameters with the return value. What I am going to do here is to implement a custom operation invoker that intercepts any call to the service, and detects whether a “Ping” header was attached to the message. If the “Ping” header is detected, the operation invoker returns a new header to tell the client that the service is alive, and the real operation execution is omitted. In that way, we have a simple heart beat mechanism based on the messages that include a "Ping” header, so the client application can determine at any point whether the service is up or down. My operation invoker wraps the default implementation attached by default to any operation by WCF. internal class PingOperationInvoker : IOperationInvoker { IOperationInvoker innerInvoker; object[] outputs = null; object returnValue = null; public const string PingHeaderName = "Ping"; public const string PingHeaderNamespace = "http://tellago.serviceModel"; public PingOperationInvoker(IOperationInvoker innerInvoker, OperationDescription description) { this.innerInvoker = innerInvoker; outputs = description.SyncMethod.GetParameters() .Where(p => p.IsOut) .Select(p => DefaultForType(p.ParameterType)).ToArray(); var returnValue = DefaultForType(description.SyncMethod.ReturnType); } private static object DefaultForType(Type targetType) { return targetType.IsValueType ? 
Activator.CreateInstance(targetType) : null; } public object Invoke(object instance, object[] inputs, out object[] outputs) { object returnValue; if (Invoke(out returnValue, out outputs)) { return returnValue; } else { return this.innerInvoker.Invoke(instance, inputs, out outputs); } } private bool Invoke(out object returnValue, out object[] outputs) { object untypedProperty = null; if (OperationContext.Current .IncomingMessageProperties.TryGetValue(HttpRequestMessageProperty.Name, out untypedProperty)) { var httpRequestProperty = untypedProperty as HttpRequestMessageProperty; if (httpRequestProperty != null) { if (httpRequestProperty.Headers[PingHeaderName] != null) { outputs = this.outputs; if (OperationContext.Current .IncomingMessageProperties.TryGetValue(HttpRequestMessageProperty.Name, out untypedProperty)) { var httpResponseProperty = untypedProperty as HttpResponseMessageProperty; httpResponseProperty.Headers.Add(PingHeaderName, "Ok"); } returnValue = this.returnValue; return true; } } } var headers = OperationContext.Current.IncomingMessageHeaders; if (headers.FindHeader(PingHeaderName, PingHeaderNamespace) > -1) { outputs = this.outputs; MessageHeader<string> header = new MessageHeader<string>("Ok"); var untyped = header.GetUntypedHeader(PingHeaderName, PingHeaderNamespace); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); returnValue = this.returnValue; return true; } returnValue = null; outputs = null; return false; } } The implementation above looks for the “Ping” header either in the Http Request or the Soap message. The next step is to implement a behavior for attaching this operation invoker to the services we want to monitor. [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false, Inherited = true)] public class PingBehavior : Attribute, IServiceBehavior, IOperationBehavior { public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters) { } public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { } public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { foreach (var endpoint in serviceDescription.Endpoints) { foreach (var operation in endpoint.Contract.Operations) { if (operation.Behaviors.Find<PingBehavior>() == null) operation.Behaviors.Add(this); } } } public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation) { } public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation) { dispatchOperation.Invoker = new PingOperationInvoker(dispatchOperation.Invoker, operationDescription); } public void Validate(OperationDescription operationDescription) { } } As an operation invoker can only be added in an “operation behavior”, a trick I learned in the past is that you can implement a service behavior as well and use the “Validate” method to inject it in all the operations, so the final configuration is much easier and cleaner. You only need to decorate the service with a simple attribute to enable the “Ping” functionality. 
[PingBehavior] public class HelloWorldService : IHelloWorld { public string Hello(string name) { return "Hello " + name; } } On the other hand, the client application needs to send a dummy message with a “Ping” header to detect whether the service is available or not. In order to simplify this task, I created a extension method in the WCF client channel to do this work. public static class ClientChannelExtensions { const string PingNamespace = "http://tellago.serviceModel"; const string PingName = "Ping"; public static bool IsAvailable<TChannel>(this IClientChannel channel, Action<TChannel> operation) { try { using (OperationContextScope scope = new OperationContextScope(channel)) { MessageHeader<string> header = new MessageHeader<string>(PingName); var untyped = header.GetUntypedHeader(PingName, PingNamespace); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); try { operation((TChannel)channel); var headers = OperationContext.Current.IncomingMessageHeaders; if (headers.Any(h => h.Name == PingName && h.Namespace == PingNamespace)) { return true; } else { return false; } } catch (CommunicationException) { return false; } } } catch (Exception) { return false; } } } This extension method basically adds a “Ping” header to the request message, executes the operation passed as argument (Action<TChannel> operation), and looks for the corresponding “Ping” header in the response to see the results. The client application can use this extension with a single line of code, var client = new ServiceReference.HelloWorldClient(); var isAvailable = client.InnerChannel.IsAvailable<IHelloWorld>((c) => c.Hello(null)); The “isAvailable” variable will tell the client application whether the service is available or not. You can download the complete implementation from this location.    
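As a quick usage illustration (my own sketch, building only on the IsAvailable extension method and the generated HelloWorldClient proxy shown above), a small monitoring console could probe the service on an interval and log whether the "Ping" header round-trips:

using System;
using System.Threading;

class ServiceHeartbeatMonitor
{
    static void Main()
    {
        while (true)
        {
            // Create a fresh proxy per probe so a channel faulted by a failed
            // attempt is never reused on the next one.
            var client = new ServiceReference.HelloWorldClient();
            bool isAvailable = client.InnerChannel.IsAvailable<IHelloWorld>(c => c.Hello(null));
            Console.WriteLine("{0:T}  HelloWorldService is {1}", DateTime.Now, isAvailable ? "up" : "down");

            client.Abort();   // unconditional cleanup; Close() would throw if the channel faulted
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}

Because the custom operation invoker short-circuits the call whenever the Ping header is present, this probe never executes the real Hello logic on the service side, which keeps the heart-beat cheap.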

    Read the article

  • CodePlex Daily Summary for Monday, July 01, 2013

    CodePlex Daily Summary for Monday, July 01, 2013Popular ReleasesQuickMon: Version 2.10.3: Mainly just a service release - no major changes. Toolbar buttons on main and config window can now be re-arrange (using ALT key) Added property to disable corrective scriptsDotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.Roadkill - .NET Wiki engine: Roadkill v1.7: New features in 1.7: New file manager: Multiple file uploads Drag and drop uploads Delete folders (admins only) Delete files (admins only) (Experimental) Syntaxhighlighting custom variable (using https://github.com/alexgorbatchev/SyntaxHighlighter) - use [[[code lang=c#|your code here]]] (Experimental) MathJax custom variable - use [[[Mathjax]]] and $$your tex$$ on the page. Improved black bar theme Site speed improvements for Javascript/CSS files - now just two files files ea...Download Sharepoint Solution package: Release 4: version updated for SP2013WinRT XAML Toolkit: WinRT XAML Toolkit - 1.5: WinRT XAML Toolkit based on the Windows 8.0 and 8.1 Preview SDKs. Do not download the source code from here if you are looking for latest updates! You can download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Composition library for visual tree rende...Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source CodeThe distribution contains a complete VS2010 project for the appli...ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX configTwitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributableUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. 
You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsNew ProjectsAerCloud.net Client - Java, Linux & Windows: This project source code provides a step by step guide for using AerCloud.net Framework as a Service API. 
For more information please visit http://www.aercloudAmiClient – Asterisk Manager Interface (AMI) client based on the Rx Framework: Asterisk Manager Interface (AMI) client based on the Rx Frameworkbaidupan: cdcddddC#??????: C#??????ImageHelper: imagehelperIP switcher: IP switcher is a simple tool for switching settings, and store presets, on networkadapters.MastersProject: A MS project with a goal of creating a fully Code Contracts verified physics engine and a relatively simple game that uses it.Multiplatform card game: Example multipatform project.PhoneTools: A collection of tools designed to help developers create beautiful Windows Phone 8 apps.rodidexter: lllSharePoint 2013 List Item Encryption: This coding exercise project enables you to encrypt/decrypt list item text field in the browser using industry standard algorithms.tvaSoft: simulation, rotor dynamics, Finite Element Analisys, FEM, ODE, torsional vibration, flexural vibrationX3DML Project: X3DML is an xml-based markup language that defines rules for modeling 3D scenes from a tag-based document. It may be usefull in 3D web design and VR.zhuang-tfs: zhuang tfs

    Read the article

  • Tailoring the Oracle Fusion Applications User Interface with Oracle Composer

    - by mvaughan
    By Killian Evers, Oracle Applications User Experience Changing the user interface (UI) is one of the most common modifications customers perform to Oracle Fusion Applications. Typically, customers add or remove a field based on their needs. Oracle makes the process of tailoring easier for customers, and reduces the burden for their IT staff, which you can read about on the Usable Apps website or in an earlier VoX post.This is the first in a series of posts that will talk about the tools that Oracle has provided for tailoring with its family of composers. These tools are designed for business systems analysts, and they allow employees other than IT staff to make changes in an upgrade-safe and patch-friendly manner. Let’s take a deep dive into one of these composers, the Oracle Composer. Oracle Composer allows business users to modify existing UIs after they have been deployed and are in use. It is an integral component of our SaaS offering. Using Oracle Composer, users can control:     •    Who sees the changes     •    When the changes are made     •    What changes are made Change for me, change for you, change for all of youOne of the most powerful aspects of Oracle Composer is its flexibility. Oracle uses Oracle Composer to make changes for a user or group of users – those who see the changes. A user of Oracle Fusion Applications can make changes to the user interface at runtime via Oracle Composer, and these changes will remain every time they log into the system. For example, they can rearrange certain objects on a page, add and remove designated content, and save queries.Business systems analysts can make changes to Oracle Fusion Application UIs for groups of users or all users. Oracle’s Fusion Middleware Metadata Services (MDS) stores these changes and retrieves them at runtime, merging customizations with the base metadata and revealing the final experience to the end user. A tailored application can have multiple customization layers, and some layers can be specific to certain Fusion Applications. Some examples of customization layers are: site, organization, country, or role. Customization layers are applied in a specific order of precedence on top of the base application metadata. This image illustrates how customization layers are applied.What time is it?Users make changes to UIs at design time, runtime, and design time at runtime. Design time changes are typically made by application developers using an integrated development environment, or IDE, such as Oracle JDeveloper. Once made, these changes are then deployed to managed servers by application administrators. Oracle Composer covers the other two areas: Runtime changes and design time at runtime changes. When we say users are making changes at runtime, we mean that the changes are made within the running application and take effect immediately in the running application. A prime example of this ability is users who make changes to their running application that only affect the UIs they see. What is new with Oracle Composer is the last area: Design time at runtime.  A business systems analyst can make changes to the UIs at runtime but does not have to make those changes immediately to the application. These changes are stored as metadata, separate from the base application definitions. Customizations made at runtime can be saved in a sandbox so that the changes can be isolated and validated before being published into an environment, without the need to redeploy the application. 
What can I do?Oracle Composer can be run in one of two modes. Depending on which mode is chosen, you may have different capabilities available for changing the UIs. The first mode is view mode, the most common default mode for most pages. This is the mode that is used for personalizations or user customizations. Users can access this mode via the Personalization link (see below) in the global region on Oracle Fusion Applications pages. In this mode, you can rearrange components on a page with drag-and-drop, collapse or expand components, add approved external content, and change the overall layout of a page. However, all of the changes made this way are exclusive to that particular user.The second mode, edit mode, is typically made available to select users with access privileges to edit page content. We call these folks business systems analysts. This mode is used to make UI changes for groups of users. Users with appropriate privileges can access the edit mode of Oracle Composer via the Administration menu (see below) in the global region on Oracle Fusion Applications pages. In edit mode, users can also add components, delete components, and edit component properties. While in edit mode in Oracle Composer, there are two views that assist the business systems analyst with making UI changes: Design View and Source View (see below). Design View, the default view, is a WYSIWYG rendering of the page and its content. The business systems analyst can perform these actions: Add content – including custom content like a portlet displaying news or stock quotes, or predefined content delivered from Oracle Fusion Applications (including ADF components and task flows) Rearrange content – performed via drag-and-drop on the page or by using the actions menu of a component or portlet to move content around Edit component properties and parameters – for specific components, control the visual properties such as text or display labels, or parameters such as RSS feeds Hide or show components – hidden components can be re-shown Delete components Change page layout – users can select from eight pre-defined layouts Edit page properties – create or edit a page’s parameters and display properties Reset page customizations – remove edits made to the page in the current layer and/or reset the page to a previous state. Detailed information on each of these capabilities and the additional actions not covered in the list above can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter.This image shows what the screen looks like in Design View.Source View, the second option in the edit mode of Oracle Composer, provides a WYSIWYG and a hierarchical rendering of page components in a component navigator. In Source View, users can access and modify properties of components that are not otherwise selectable in Design View. For example, many ADF Faces components can be edited only in Source View. Users can also edit components within a task flow. This image shows what the screen looks like in Source View.Detailed information on Source View can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter.Oracle Composer enables any application or portal to be customized or personalized after it has been deployed and is in use. It is designed to be extremely easy to use so that both business systems analysts and users can edit Oracle Fusion Applications pages with a few clicks of the mouse. 
Oracle Composer runs in all modern browsers and provides a rich, dynamic way to edit JSF application and portal pages.From the editor: The next post in this series about composers will be on Data Composer. You can also catch Killian speaking about extensibility at OpenWorld 2012 and in her Faces of Fusion video.

    Read the article

  • Physical Directories vs. MVC View Paths

    - by Rick Strahl
    This post falls into the bucket of operator error on my part, but I want to share this anyway because it describes an issue that has bitten me a few times now and writing it down might keep it a little stronger in my mind. I've been working on an MVC project the last few days, and at the end of a long day I accidentally moved one of my View folders from the MVC Root Folder to the project root. It must have been at the very end of the day before shutting down because tests and manual site navigation worked fine just before I quit for the night. I checked in changes and called it a night. Next day I came back, started running the app and had a lot of breaks with certain views. Oddly custom routes to these controllers/views worked, but stock /{controller}/{action} routes would not. After a bit of spelunking I realized that "Hey one of my View Folders is missing", which made some sense given the error messages I got. I looked in the recycle bin - nothing there, so rather than try to figure out what the hell happened, just restored from my last SVN checkin. At this point the folders are back… but… view access  still ends up breaking for this set of views. Specifically I'm getting the Yellow Screen of Death with: CS0103: The name 'model' does not exist in the current context Here's the full error: Server Error in '/ClassifiedsWeb' Application. Compilation ErrorDescription: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.Compiler Error Message: CS0103: The name 'model' does not exist in the current contextSource Error: Line 1: @model ClassifiedsWeb.EntryViewModel Line 2: @{ Line 3: ViewBag.Title = Model.Entry.Title + " - " + ClassifiedsBusiness.App.Configuration.ApplicationName; Source File: c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml    Line: 1 Compiler Warning Messages: Show Detailed Compiler Output: Show Complete Compilation Source: Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.272 Here's what's really odd about this error: The views now do exist in the /Views/Classifieds folder of the project, but it appears like MVC is trying to execute the views directly. This is getting pretty weird, man! So I hook up some break points in my controllers to see if my controller actions are getting fired - and sure enough it turns out they are not - but only for those views that were previously 'lost' and then restored from SVN. WTF? At this point I'm thinking that I must have messed up one of the config files, but after some more spelunking and realizing that all the other Controller views work, I give up that idea. Config's gotta be OK if other controllers and views are working. Root Folders and MVC Views don't mix As I mentioned the problem was the fact that I inadvertantly managed to drag my View folder to the root folder of the project. Here's what this looks like in my FUBAR'd project structure after I copied back /Views/Classifieds folder from SVN: There's the actual root folder in the /Views folder and the accidental copy that sits of the root. I of course did not notice the /Classifieds folder at the root because it was excluded and didn't show up in the project. Now, before you call me a complete idiot remember that this happened by accident - an accidental drag probably just before shutting down for the night. :-) So why does this break? 
MVC should be happy with views in the /Views/Classifieds folder right? While MVC might be happy, IIS is not. The fact that there is a physical folder on disk takes precedence over MVC's routing. In other words if a URL exists that matches a route the physical path is accessed first. What happens here is that essentially IIS is trying to execute the .cshtml pages directly without ever routing to the Controller methods. In the error page I showed above my clue should have been that the view was served as: c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml rather than c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Views\Classifieds\Show.cshtml. But of course I didn't notice that right away, just skimming to the end and looking at the file name. The reason that /classifieds/list actually fires that file is that the ASP.NET Web Pages engine looks for physical files on disk that match a path. IOW, when calling Web Pages you drop the .cshtml off the Razor page and IIS will serve that just fine. So: /classifieds/list looks and tries to find /classifieds/list.cshtml and executes that script. And that is exactly what's happening. Web Pages is trying to execute the .cshtml file and it fails because Web Pages knows nothing about the @model tag which is an MVC specific template extension. This is why my breakpoints in the controller methods didn't fire and it also explains why the error mentions that the @model keyword is invalid (@model is an MVC provided template enhancement to the Razor Engine). The solution of course is super simple: Delete the accidentally created root folder and the problem is solved. Routing and Physical Paths I've run into problems with this before actually. In the past I've had a number of applications that had a physical /Admin folder which also would conflict with an MVC Admin controller. More than once I ended up wondering why the index route (/Admin/) was not working properly. If a physical /Admin folder exists, /Admin will not route to the Index action (or whatever default action you have set up), but instead try to list the directory or show the default document in the folder. The only way to force the index page through MVC is to explicitly use /Admin/Index. Makes perfect sense once you realize the physical folder is there, but that's easy to forget in an MVC application. As you might imagine after a few times of running into this I gave up on the Admin folder and moved everything into MVC views to handle those operations. Still it's one of those things that can easily bite you, because the behavior and error messages seem to point at completely different problems. Moral of the story is: If you see routing problems where routes are not reaching obvious controller methods, always check to make sure there isn't a physical path being mapped by IIS instead. That way you won't feel stupid like I did after trying a million things for about an hour before discovering my sloppy mousing behavior :-)
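For cases where you genuinely need to keep a physical folder whose name collides with a controller, one option (not something from this post – just a sketch using the standard ASP.NET routing API) is the RouteCollection.RouteExistingFiles flag, which tells routing to evaluate routes even when a matching file or folder exists on disk:

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    // Called from Application_Start: RouteConfig.RegisterRoutes(RouteTable.Routes);
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Anything you still want IIS to serve straight from disk needs an explicit ignore.
        routes.IgnoreRoute("Content/{*pathInfo}");
        routes.IgnoreRoute("Scripts/{*pathInfo}");

        // Evaluate routes even when a physical file or folder matches the URL,
        // so /Admin reaches AdminController.Index instead of the physical /Admin folder.
        routes.RouteExistingFiles = true;

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}

The trade-off is that the flag applies to every request, so this is worth it only when the colliding folder has to stay; when the folder is there by accident, as it was here, deleting it remains the simpler fix.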
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in MVC, IIS7.

    Read the article

  • nginx problem accessing virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs on directadmin and lamp stack. I have nginx running on port 81. I can access all my sites (including virtual ips) on the port 81. However when I forward the traffic from port 80 to 81, the virtual ips have a message saying "Apache is running normally". Server IPs are fine, and I can still access virtual IP's on 81. [root@~]# netstat -an | grep LISTEN | egrep ":80|:81" tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <serverip>:81 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct> apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process here is my nginx conf: cat /usr/local/nginx/conf/nginx.conf user apache apache; worker_processes 2; # Set it according to what your CPU have. 
4 Cores = 4 worker_rlimit_nofile 8192; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; server_tokens off; access_log /var/log/nginx_access.log main; error_log /var/log/nginx_error.log debug; server_names_hash_bucket_size 64; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 30; gzip on; gzip_comp_level 9; gzip_proxied any; proxy_buffering on; proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m; proxy_buffer_size 16k; proxy_buffers 100 8k; proxy_connect_timeout 60; proxy_send_timeout 60; proxy_read_timeout 60; server { listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <server host name> _; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<server ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } location /nginx_status { stub_status on; access_log off; allow 127.0.0.1; deny all; } } include /usr/local/nginx/vhosts/*.conf; } here is my vhost conf: # cat /usr/local/nginx/vhosts/1.conf server { listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <virt domain name>.com ; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<virt ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } }

    Read the article

< Previous Page | 525 526 527 528 529 530 531 532 533 534 535 536  | Next Page >