Search Results

Search found 26473 results on 1059 pages for 'base url'.

Page 171/1059

  • CentOs 6.3 can't install Git

    - by drivard
    I am trying to install git on an OpenVZ container using the CentOs 6.3 precreated template. When I try the command line yum install git I get the message: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: www.cubiculestudio.com * epel: mirror.csclub.uwaterloo.ca * extras: www.cubiculestudio.com * rpmforge: mirror.us.leaseweb.net * updates: www.cubiculestudio.com Setting up Install Process No package git available. Error: Nothing to do As I understand it, the git package should be in the centos6 base repository: http://pkgs.org/centos-6-rhel-6/centos-rhel-x86_64/git-1.7.1-2.el6_0.1.x86_64.rpm.html But yum doesn't find it. I even have the EPEL and RPMForge repos enabled, but still can't find the git package. yum repolist Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: www.cubiculestudio.com * epel: mirror.csclub.uwaterloo.ca * extras: www.cubiculestudio.com * rpmforge: mirror.us.leaseweb.net * updates: www.cubiculestudio.com repo id repo name status base CentOS-6 - Base 4,776 epel Extra Packages for Enterprise Linux 6 - i386 6,523 extras CentOS-6 - Extras 4 rpmforge RHEL 6 - RPMforge.net - dag 4,501 updates CentOS-6 - Updates 596 vz-base vz-base 3 vz-updates vz-updates 0 repolist: 16,403 The weirdest thing is that my OpenVZ server is running on CentOs 6.3 and I was able to install git on it without any issue. Can you help me understand why it doesn't find the package? Thank you in advance.
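
    A frequent culprit with hosting and OpenVZ templates is an exclude line in the yum configuration that filters packages out of every repo, so it is worth ruling that out first. A hedged diagnostic sketch (paths are the CentOS defaults):

        # Look for exclude= lines that could be filtering out git:
        grep -ri "exclude" /etc/yum.conf /etc/yum.repos.d/

        # Rebuild the metadata cache and ask every enabled repo about git:
        yum clean all
        yum list available git\*

        # If an exclude is the cause, this bypasses it for one transaction:
        yum --disableexcludes=all install git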

    Read the article

  • Error in requiring Sinatra gem

    - by Dan
    I'm having a hard time getting Sinatra running on my local setup, Ubuntu Karmic 9.10. The error thrown when I require 'sinatra' is: NoMethodError: undefined method `[]' for nil:NilClass from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:891:in `compile' from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:883:in `gsub' from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:883:in `compile' from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:856:in `route' from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:838:in `get' from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:1077 from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:929:in `configure' from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:1076 from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra.rb:4 from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from (irb):2 from :0 I've tried: Uninstalling/reinstalling Sinatra Updating all gems Ensuring all dependencies exist (rack) Any ideas? Your time and help are greatly appreciated!

    Read the article

  • raw_id_fields for modelforms

    - by nbv4
    I have a modelform which has one field that is a ForeignKey value to a model which has 40,000 rows. The default modelform tries to create a select box with 40,000 options, which, to say the least, is not ideal. Even more so when this modelform is used in a formset factory! In the admin, this is easily avoidable by using "raw_id_fields", but there doesn't seem to be a modelform equivalent. How can I do this? Here is my modelform: class OpBaseForm(ModelForm): base = forms.CharField() class Meta: model = OpBase exclude = ['operation', 'routes'] extra = 0 raw_id_fields = ('base', ) #does nothing The first bolded line works by not creating the huge unwieldy selectbox, but when I try to save a formset of this form, I get the error: "OpBase.base" must be a "Base" instance. In order for the modelform to be saved, 'base' needs to be a Base instance. Apparently, a string representation of a Base primary key isn't enough (at least not automatically). I need some kind of mechanism to change the string that is given by the form into a Base instance. And this mechanism has to work in a formset. Any ideas? If only raw_id_fields would work, this would be easy as cake. But as far as I can tell, it is only available in the admin.
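
    A common workaround is to swap the plain CharField for a ModelChoiceField rendered as a text input: the field still converts the submitted primary key into a Base instance during validation, but no giant select box is ever built. A sketch, where the import path myapp.models is an assumption:

        from django import forms
        from django.forms import ModelForm
        from myapp.models import Base, OpBase  # hypothetical import path

        class OpBaseForm(ModelForm):
            # Renders as a plain text box holding the primary key, but
            # clean() still resolves that key into a Base instance, so the
            # formset can save without the "must be a Base instance" error.
            base = forms.ModelChoiceField(
                queryset=Base.objects.all(),
                widget=forms.TextInput)

            class Meta:
                model = OpBase
                exclude = ['operation', 'routes']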

    Read the article

  • Emulating Dynamic Dispatch in C++ based on Template Parameters

    - by Jon Purdy
    This is heavily simplified for the sake of the question. Say I have a hierarchy: struct Base { virtual int precision() const = 0; }; template<int Precision> struct Derived : public Base { typedef typename Traits<Precision>::Type Type; Derived(Type data) : value(data) {} virtual int precision() const { return Precision; } Type value; }; I want a function like: Base* function(const Base& a, const Base& b); Where the specific type of the result of the function is the same type as whichever of a and b has the greater Precision; something like the following pseudocode: template<class T> T* operation(const T& a, const T& b) { return new T(a.value + b.value); } Base* function(const Base& a, const Base& b) { if (a.precision() > b.precision()) return operation((A&)a, A(b.value)); else if (a.precision() < b.precision()) return operation(B(a.value), (B&)b); else return operation((A&)a, (A&)b); } Where A and B are the specific types of a and b, respectively. I want function to operate independently of how many instantiations of Derived there are. I'd like to avoid a massive table of typeid() comparisons, though RTTI is fine in answers. Any ideas?
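
    One way to stay independent of the set of instantiations is a small factory registry keyed by precision: each Derived<P> registers itself once, and the dispatcher builds the result at the greater precision. A hedged sketch; it assumes Base gains a virtual accessor (here value_as_long_double(), an invented name) and that long double can represent every Traits<P>::Type:

        #include <algorithm>
        #include <functional>
        #include <map>

        using Factory = std::function<Base*(long double)>;

        std::map<int, Factory>& registry() {
            static std::map<int, Factory> r;  // one entry per instantiated precision
            return r;
        }

        // Defining one `static Register<P> reg;` next to each Derived<P>
        // instantiation populates the registry at startup.
        template<int Precision>
        struct Register {
            Register() {
                registry()[Precision] = [](long double v) {
                    return new Derived<Precision>(
                        static_cast<typename Traits<Precision>::Type>(v));
                };
            }
        };

        Base* function(const Base& a, const Base& b) {
            // Build the result at the greater precision; no typeid() table.
            int p = std::max(a.precision(), b.precision());
            return registry().at(p)(a.value_as_long_double() +
                                    b.value_as_long_double());
        }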

    Read the article

  • WM_NOTIFY and superclass chaining issue in Win32

    - by DasMonkeyman
    For reference, I'm using the window superclass method outlined in this article. The specific issue occurs when I want to handle WM_NOTIFY messages (e.g., for custom drawing) from the base control in the superclass: I either need to reflect them back from the parent window or set my own window as the parent (passed inside CREATESTRUCT for WM_(NC)CREATE to the base class). This method works fine if I have a single superclass. If I superclass my superclass, then I run into a problem. Now 3 WindowProcs are operating in the same HWND, and when I reflect WM_NOTIFY messages (or have them sent to myself via the parent trick above) they always go to the outermost (most derived) WindowProc. I have no way to tell if they're messages intended for the inner superclass (base messages are supposed to go to the first superclass) or messages intended for the outer superclass (messages from the inner superclass are intended for the outer superclass). These messages are indistinguishable because they all come from the same HWND with the same control ID. Is there any way to resolve this without creating a new window to encapsulate each level of inheritance? Sorry about the wall of text. It's a difficult concept to explain. Here's a diagram.

    single superclass:

        SuperA::WindowProc() - Base::WindowProc()---\
            ^--------WM_NOTIFY(Base)--------/

    superclass of a superclass:

        SuperB::WindowProc() - SuperA::WindowProc() - Base::WindowProc()---\
            ^--------WM_NOTIFY(Base)--------+-----------------------/
            ^--------WM_NOTIFY(A)-----------/

    The WM_NOTIFY messages in the second case all come from the same HWND and control ID, so I cannot distinguish between the messages intended for SuperA (from Base) and messages intended for SuperB (from SuperA). Any ideas?

    Read the article

  • implementing a read/write field for an interface that only defines read

    - by PaulH
    I have a C# 2.0 application where a base interface allows read-only access to a value in a concrete class. But, within the concrete class, I'd like to have read/write access to that value. So, I have an implementation like this: public abstract class Base { public abstract DateTime StartTime { get; } } public class Foo : Base { DateTime start_time_; public override DateTime StartTime { get { return start_time_; } internal set { start_time_ = value; } } } But this gives me the error: Foo.cs(200,22): error CS0546: 'Foo.StartTime.set': cannot override because 'Base.StartTime' does not have an overridable set accessor I don't want the base class to have write access. But I do want the concrete class to provide read/write access. Is there a way to make this work? Thanks, PaulH Unfortunately, Base can't be changed to an interface, as it also contains non-abstract functionality. Something I should have thought to put in the original problem description. public abstract class Base { public abstract DateTime StartTime { get; } public void Buzz() { // do something interesting... } } My solution is to do this: public class Foo : Base { DateTime start_time_; public override DateTime StartTime { get { return start_time_; } } internal void SetStartTime(DateTime value) { start_time_ = value; } } It's not as nice as I'd like, but it works.

    Read the article

  • C++ polymorphism and slicing

    - by Draco Ater
    The following code prints out Derived Base Base But I need every Derived object put into User::items to call its own print function, not the base class one. Can I achieve that without using pointers? If it is not possible, how should I write the function that deletes User::items one by one and frees the memory, so that there are no memory leaks? #include <iostream> #include <vector> #include <algorithm> using namespace std; class Base{ public: virtual void print(){ cout << "Base" << endl;} }; class Derived: public Base{ public: void print(){ cout << "Derived" << endl;} }; class User{ public: vector<Base> items; void add_item( Base& item ){ item.print(); items.push_back( item ); items.back().print(); } }; void fill_items( User& u ){ Derived d; u.add_item( d ); } int main(){ User u; fill_items( u ); u.items[0].print(); }
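
    Without pointers this cannot work: vector<Base> stores Base objects by value, so push_back copies only the Base subobject and the Derived part is sliced away, leaving virtual dispatch nothing to work with. A minimal sketch of the pointer version, assuming the hierarchy gains a virtual destructor and a virtual clone():

        class Base{
        public:
            virtual ~Base() {}   // so delete through Base* destroys Derived fully
            virtual Base* clone() const { return new Base(*this); }
            virtual void print(){ cout << "Base" << endl; }
        };

        class Derived: public Base{
        public:
            virtual Derived* clone() const { return new Derived(*this); }
            void print(){ cout << "Derived" << endl; }
        };

        class User{
        public:
            vector<Base*> items;                  // pointers: no slicing on insert

            void add_item( const Base& item ){
                items.push_back( item.clone() );  // copy keeps the dynamic type
                items.back()->print();            // now prints "Derived"
            }

            ~User(){                              // frees every item exactly once
                for (size_t i = 0; i < items.size(); ++i) delete items[i];
            }
        };

    With this, u.items[0]->print() prints "Derived", and the memory is released automatically when u goes out of scope.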

    Read the article

  • The best way to write a jQuery plugin - If there is such a way?

    - by Nick Lowman
    There are quite a few ways to write plugins, e.g. here's a nice example, and what I've seen quite a lot of lately is the following code pattern, used by Doug Neiner here; (function($){ $.formatLink = function(el, options){ var base = this; base.$el = $(el); base.el = el; base.$el.data("formatLink", base); base.init = function(){ base.options = $.extend({}, $.formatLink.defaultOptions, options); //code here } base.init(); }; $.formatLink.defaultOptions = { }; $.fn.formatLink = function(options){ return this.each(function(){ (new $.formatLink(this, options)); }); }; })(jQuery); So, can anyone tell me the benefits of using the pattern above rather than the one below? I can't see the point in calling the $.extend function for every element in the jQuery stack (above), whereas the example below only does this once and then works on the stack. To test it I created two plugins, using both patterns, which applied styles to about 5000 li elements, and the code below took about 1 second whereas the pattern above took about 1.3 seconds. (function($){ $.fn.formatLink = function(options){ var options = $.extend({}, $.fn.formatLink.defaultOptions, options || {}); return this.each(function(){ //code here }); }; $.fn.formatLink.defaultOptions = {}; })(jQuery);

    Read the article

  • How to add a has_many association on all models

    - by joshsz
    Right now I have an initializer that does this: ActiveRecord::Base.send :has_many, :notes, :as => :notable ActiveRecord::Base.send :accepts_nested_attributes_for, :notes It builds the association just fine, except when I load a view that uses it, the second load gives me: can't dup NilClass from: /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2184:in `dup' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2184:in `scoped_methods' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2188:in `current_scoped_methods' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2171:in `scoped?' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2439:in `send' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2439:in `initialize' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/reflection.rb:162:in `new' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/reflection.rb:162:in `build_association' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/association_collection.rb:423:in `build_record' /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/association_collection.rb:102:in `build' (my app)/controllers/manifests_controller.rb:21:in `show' Any ideas? Am I doing this the wrong way? Interestingly if I move the association onto just the model I'm working with at the moment, I don't get this error. I figure I must be building the global association incorrectly.
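
    A hedged workaround for this Rails 2.3-era failure, which matches the observation that defining the association on a single model works, is to stop reopening ActiveRecord::Base in the initializer and instead mix the association into each model that needs it. Module and model names below are illustrative:

        # e.g. in lib/notable.rb (hypothetical module name)
        module Notable
          def self.included(base)
            base.has_many :notes, :as => :notable
            base.accepts_nested_attributes_for :notes
          end
        end

        # In each model that should have notes:
        class Manifest < ActiveRecord::Base
          include Notable
        end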

    Read the article

  • Will this cause a problem with different runtimes with DLL?

    - by Milo
    My gui application supports polymorphic timed events, which means that the user calls new and the gui calls delete. This can create a problem if the runtimes are incompatible. So I was told a proposed solution would be this: class base; class Deallocator { public: void operator()(base* ptr) { delete ptr; } }; class base { public: base(Deallocator dealloc) { m_deleteFunc = dealloc; } ~base() { m_deleteFunc(this); } private: Deallocator m_deleteFunc; }; int main() { Deallocator deletefunc; base baseObj(deletefunc); } While this is a good solution, it does demand that the user create a Deallocator object, which I do not want. I was however wondering if I provided a Deallocator to each derived class: eg class derived : public base { Deallocator dealloc; public: derived() : base(dealloc) { } }; I think this still does not work, though. The constraints are: The addTimedEvent() function is part of the Widget class, which is also in the dll, but it is instantiated by the user. The other constraint is that some classes which derive from Widget call this function with their own timed event classes. Given that "he who called new must call delete", what could work given these constraints? Thanks
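
    Another commonly suggested pattern for crossing a DLL boundary keeps "he who called new must call delete" without any Deallocator object: route destruction through the object's own vtable, so the delete executes in the module that implemented the concrete class. A sketch (names are illustrative):

        class base {
        public:
            virtual void destroy() = 0;   // every concrete event implements this
        protected:
            virtual ~base() {}            // non-public: the gui cannot call delete
        };

        class derived : public base {
        public:
            // Compiled into the user's binary, so new and delete both resolve
            // against the user's runtime.
            virtual void destroy() { delete this; }
        };

        // Inside the gui library, instead of `delete event;`:
        void removeTimedEvent(base* event) { event->destroy(); }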

    Read the article

  • dynamic inheritance without touching classes

    - by Jasper
    I feel like the answer to this question is really simple, but I really am having trouble finding it. So here goes: Suppose you have the following classes: class Base; class Child : public Base; class Displayer { public: Displayer(Base* element); Displayer(Child* element); } Additionally, I have a Base* object which might point to either an instance of the class Base or an instance of the class Child. Now I want to create a Displayer based on the element pointed to by object; however, I want to pick the right version of the constructor. As I currently have it, this would accomplish just that (I am being a bit fuzzy with my C++ here, but I think this is the clearest way): object->createDisplayer(); virtual void Base::createDisplayer() { new Displayer(this); } virtual void Child::createDisplayer() { new Displayer(this); } This works, however, there is a problem with this: Base and Child are part of the application system, while Displayer is part of the GUI system. I want to build the GUI system independently of the Application system, so that it is easy to replace the GUI. This means that Base and Child should not know about Displayer. However, I do not know how I can achieve this without letting the Application classes know about the GUI. Am I missing something very obvious or am I trying something that is not possible?
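
    This is the textbook case for double dispatch via the Visitor pattern: the application layer publishes an abstract visitor interface (so it still knows nothing about the GUI), and the GUI implements that interface, letting overload resolution happen at the correct static type. A hedged sketch:

        // Application side: knows nothing about Displayer or the GUI.
        class Base;
        class Child;

        class BaseVisitor {
        public:
            virtual void visit(Base& b) = 0;
            virtual void visit(Child& c) = 0;
        };

        class Base {
        public:
            virtual void accept(BaseVisitor& v) { v.visit(*this); }
        };

        class Child : public Base {
        public:
            virtual void accept(BaseVisitor& v) { v.visit(*this); }
        };

        // GUI side: overload resolution now happens at the right static type.
        class DisplayerFactory : public BaseVisitor {
        public:
            Displayer* result;
            virtual void visit(Base& b)  { result = new Displayer(&b); }
            virtual void visit(Child& c) { result = new Displayer(&c); }
        };

    The GUI then calls object->accept(factory) and reads factory.result; swapping the GUI means providing a different BaseVisitor implementation, with no change to Base or Child.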

    Read the article

  • Nginx+Passenger: 502 Bad Gateway from Nginx when passing urlencoded URLs in GET vars

    - by jimeh
    Here's an example of the URLs that don't work: http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2Fin%2Fperson http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2F However, the following URL does work: http://domain/do?url=http%3A%2F%2Fwww.linkedin.com Also, this only happens with Nginx, using Passenger with Apache it works fine, but we use Nginx on our production machines. Here's the entry in Nginx's error log: 2009/12/01 09:30:51 [error] 6407#0: *136 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: domain, request: "GET /do?url=http%3A%2F%2Fwww.linkedin.com%2F HTTP/1.1", upstream: "passenger://unix:/tmp/passenger.6335/master/helper_server.sock:", host: "domain"

    Read the article

  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise): var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi" var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi" var.viewvcstatic = "/var/www/vcs/templates/docroot" var.vcs_errorlog = "/var/log/lighttpd/error.log" var.vcs_accesslog = "/var/log/lighttpd/access.log" $HTTP["host"] =~ "domain.tld" { $SERVER["socket"] == ":443" { protocol = "https://" ssl.engine = "enable" ssl.pemfile = "/etc/lighttpd/ssl/..." ssl.ca-file = "/etc/lighttpd/ssl/..." ssl.use-sslv2 = "disable" setenv.add-environment = ( "HTTPS" => "on" ) url.rewrite-once += ("^/mercurial$" => "/mercurial/" ) url.rewrite-once += ("^/$" => "/viewvc.fcgi" ) alias.url += ( "/viewvc-static" => var.viewvcstatic ) alias.url += ( "/robots.txt" => var.robots ) alias.url += ( "/favicon.ico" => var.favicon ) alias.url += ( "/mercurial" => var.hgwebfcgi ) alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi ) $HTTP["url"] =~ "^/mercurial" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.hgwebfcgi, "socket" => "/tmp/hgwebdir.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } else $HTTP["url"] =~ "^/viewvc\.fcgi" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.viewvcfcgi, "socket" => "/tmp/viewvc.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } expire.url = ( "/viewvc-static" => "access plus 60 days" ) server.errorlog = var.vcs_errorlog accesslog.filename = var.vcs_accesslog } } Now, when I access the domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), it's of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index file mechanism somehow? Goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, also through the lighttpd documentation, but found nothing that seemed to match the problem.

    Read the article

  • Behind ASP.NET MVC Mock Objects

    - by imran_ku07
    Introduction:
    I think this sentence has now become very familiar to ASP.NET MVC developers: "ASP.NET MVC is designed with testability in mind". But what did the ASP.NET MVC team do to make applications built with ASP.NET MVC easily testable? Understanding this is very important because it helps when designing custom classes. So in this article I will discuss some abstract classes provided by the ASP.NET MVC team for the various ASP.NET intrinsic objects, including HttpContext, HttpRequest, and HttpResponse, that make these objects testable. I will also discuss why it is difficult to test ASP.NET Web Forms.

    Description:
    From Classic ASP to ASP.NET MVC, the ASP.NET intrinsic objects are extensively used in every form of web application. They provide information about the Request, Response, Server, Application and so on. But ASP.NET MVC uses these intrinsic objects in an abstract manner. The reason for this abstraction is to make your application testable. So let's see the abstraction.

    As we know, ASP.NET MVC uses the same runtime engine as ASP.NET Web Forms, so the first receiver of the request after IIS and aspnet_filter.dll is aspnet_isapi.dll. This will start the application domain. With the application domain up and running, ASP.NET does some initialization, after which it calls Application_Start if it is defined. Then the normal HTTP pipeline event handlers are executed, including both HTTP Modules and global.asax event handlers. One of the HTTP Modules registered by ASP.NET MVC is UrlRoutingModule. The purpose of this module is to match a route defined in global.asax. Every matched route must have an IRouteHandler. In the default case this is MvcRouteHandler, which is responsible for determining the HTTP handler and returns MvcHandler (which is derived from IHttpHandler). In simple words, a Route has an MvcRouteHandler, which returns the MvcHandler that is the IHttpHandler of the current request. In between the HTTP pipeline events, the handler of ASP.NET MVC, MvcHandler.ProcessRequest, is executed, as shown below:

        void IHttpHandler.ProcessRequest(HttpContext context)
        {
            this.ProcessRequest(context);
        }

        protected virtual void ProcessRequest(HttpContext context)
        {
            // HttpContextWrapper inherits from HttpContextBase
            HttpContextBase ctxBase = new HttpContextWrapper(context);
            this.ProcessRequest(ctxBase);
        }

        protected internal virtual void ProcessRequest(HttpContextBase ctxBase)
        {
            . . .
        }

    HttpContextBase is the base class. HttpContextWrapper inherits from HttpContextBase, the parent class that includes information about a single HTTP request. This is what the ASP.NET MVC team did: they simply wrap the old intrinsic HttpContext in an HttpContextWrapper object and give other frameworks the opportunity to provide their own implementation of HttpContextBase. For example:

        public class MockHttpContext : HttpContextBase
        {
            . . .
        }

    As you can see, it is very easy to create your own HttpContext. That is what third-party mock frameworks like TypeMock, Moq, RhinoMocks, or NMock2 do to provide their own implementations of the ASP.NET intrinsic object classes.

    The key point to note here is the type of the ASP.NET intrinsic objects in ASP.NET Web Forms versus ASP.NET MVC. For example, in ASP.NET Web Forms the type of the Request object is HttpRequest (which is sealed), while in ASP.NET MVC the type of the Request object is HttpRequestBase. This is one of the reasons testing ASP.NET Web Forms is difficult: there is no base class, and since the HttpRequest class is sealed, it cannot act as a base class to others. On the other side, ASP.NET MVC always uses a base class to give third parties and unit test frameworks a chance to create their own implementations of the ASP.NET intrinsic objects.

    Therefore we can say that in ASP.NET MVC, the intrinsic objects are typed as base classes (for example HttpContextBase). These base classes expose the same interface as the intrinsic objects they abstract, and include only virtual members which simply throw an exception. ASP.NET MVC also provides corresponding wrapper classes (for example HttpRequestWrapper), which give a concrete implementation of the base classes in the form of the ASP.NET intrinsic object. Other wrapper classes may be defined by third parties in the form of mock objects for testing purposes.

    So a Request object in ASP.NET MVC may be an HttpRequestWrapper, or may be a MockRequestWrapper (assuming that the MockRequestWrapper class is used for testing purposes). Here is the list of ASP.NET intrinsics and their implementations in ASP.NET MVC in the form of base and wrapper classes:

    - HttpApplicationStateBase / HttpApplicationStateWrapper: abstracts the intrinsic Application object
    - HttpBrowserCapabilitiesBase / HttpBrowserCapabilitiesWrapper: abstracts the HttpBrowserCapabilities class
    - HttpCachePolicyBase / HttpCachePolicyWrapper: abstracts the HttpCachePolicy class
    - HttpContextBase / HttpContextWrapper: abstracts the intrinsic HttpContext object
    - HttpFileCollectionBase / HttpFileCollectionWrapper: abstracts the HttpFileCollection class
    - HttpPostedFileBase / HttpPostedFileWrapper: abstracts the HttpPostedFile class
    - HttpRequestBase / HttpRequestWrapper: abstracts the intrinsic Request object
    - HttpResponseBase / HttpResponseWrapper: abstracts the intrinsic Response object
    - HttpServerUtilityBase / HttpServerUtilityWrapper: abstracts the intrinsic Server object
    - HttpSessionStateBase / HttpSessionStateWrapper: abstracts the intrinsic Session object
    - HttpStaticObjectsCollectionBase / HttpStaticObjectsCollectionWrapper: abstracts the HttpStaticObjectsCollection class

    Summary:
    ASP.NET MVC provides a set of abstract classes for the ASP.NET intrinsic objects in the form of base classes, allowing anyone to create their own implementation. In addition, ASP.NET MVC also provides a set of concrete classes in the form of wrapper classes. This design makes applications easier to test, and an application may even replace the concrete implementation with its own, which makes ASP.NET MVC very flexible.
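
    To make the MockHttpContext idea above concrete, here is a minimal hand-rolled fake (no mocking library), assuming the code under test only reads Request.Path; every other member keeps the base-class behavior of throwing:

        using System.Web;

        public class FakeHttpContext : HttpContextBase
        {
            private readonly HttpRequestBase request;

            public FakeHttpContext(string path)
            {
                request = new FakeHttpRequest(path);
            }

            public override HttpRequestBase Request
            {
                get { return request; }
            }

            private class FakeHttpRequest : HttpRequestBase
            {
                private readonly string path;

                public FakeHttpRequest(string path)
                {
                    this.path = path;
                }

                public override string Path
                {
                    get { return path; }
                }
            }
        }

    Any code written against HttpContextBase can now be handed a new FakeHttpContext("/some/path") in a unit test, with no web server involved.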

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to enable the CDN for the static content of his website, which had just been published on Windows Azure. The approach would be:

    1. Move the static content (the images, CSS files, etc.) into blob storage.
    2. Enable the CDN on his storage account.
    3. Change the URLs of those static files to the CDN URL.

    I think these are the very common steps when using a CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been announced at the Windows Azure Blog. One of the new features in this release concerns the CDN: we can now enable the CDN not only for a storage account, but for a hosted service as well. With this new feature the steps I mentioned above become much simpler.

    Enable CDN for Hosted Service
    To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the "Hosted Services, Storage Accounts & CDN" item we will find a new menu on the left-hand side named "CDN", where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed in my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button at the top. In this dialog we can select the subscription and the storage account, or the hosted service, we want the CDN enabled for. If we selected the hosted service, like I did in the image above, the "Source URL for the CDN endpoint" is shown automatically. This means the Windows Azure platform will make all content under the "/cdn" folder CDN-enabled. We cannot change this value at the moment. The following 3 checkboxes next to the URL are:

    - Enable CDN: Enable or disable the CDN.
    - HTTPS: If we need to use HTTPS connections, check it.
    - Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it.

    Just click the "Create" button to let Windows Azure create the CDN for our hosted service. The CDN will be available within 60 minutes, as Microsoft mentioned. My experience is that after about 15 minutes the CDN can be used, and we can find the CDN URL in the portal as well.

    Put the Content in CDN in Hosted Service
    Let's create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. When we created the CDN mentioned above, the source URL of the CDN endpoint was under the "/cdn" folder. So in Visual Studio we create a folder under the website named "cdn" and put some static files there. All these files will be cached by the CDN if we use the CDN endpoint. The CDN of the hosted service can also cache a kind of "dynamic" result with the Query String feature enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller is /Cdn/GetNumber, which can be CDN-ed as well, since the URL says it is under the "/cdn" folder. In the GetNumber action we just put a number value, specified by a parameter, into the view model, so the URL could be like /Cdn/GetNumber?number=2.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace MvcWebRole1.Controllers
        {
            public class CdnController : Controller
            {
                //
                // GET: /Cdn/

                public ActionResult GetNumber(int number)
                {
                    return View(number);
                }
            }
        }

    And we add a view to display the number, which is super simple.

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            GetNumber
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>The number is: <%: Model.ToString() %></h2>
        </asp:Content>

    Since this action is under the CdnController, the URL is under the "/cdn" folder, which means it can be CDN-ed. And since we checked "Query String", the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check if there is any content cached with the key "GetNumber?number=2". If yes, the CDN returns the content directly; otherwise it connects to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, then sends the result back to the browser and caches it in the CDN. Note, though, that query strings are treated as strings when used as the key of a CDN element. This means the URLs below would be cached as 2 separate elements in the CDN:

    http://az25311.vo.msecnd.net/GetNumber?number=2&page=1
    http://az25311.vo.msecnd.net/GetNumber?page=1&number=2

    The final step is to upload the project onto Azure.

    Test the Hosted Service CDN
    After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we created is az25311.vo.msecnd.net, so all files under the "/cdn" folder can be requested with it. Let's have a try with the sample.htm and c_great_wall.jpg static files. We can also request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page it is shown very quickly, since the content comes from the CDN without any server-side MVC processing. The style of this page is missing, though. This is because the CSS file was not included in the "/cdn" folder, so the page cannot retrieve it from the CDN URL.

    Summary
    In this post I introduced the new CDN feature that arrived with Windows Azure SDK 1.4 and the new Developer Portal. With the CDN of the hosted service we can just put the static resources under a "/cdn" folder so that the CDN caches them automatically; there is no need to put them into blob storage. It also supports caching dynamic content with the Query String feature, so we can cache parts of a web page by using user controls and the CDN. For example, we can cache the log-on user control in the master page so that the log-on part loads super-fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Insufficient permissions when calling flickr.auth.oauth.checkToken

    - by Designer 17
    This is a follow up on another question I had asked on stackoverflow a day or so ago. I'm working on trying to call flickr.people.getPhotos... but no matter what I do I keep getting this... jsonFlickrApi({"stat":"fail", "code":99, "message":"Insufficient permissions. Method requires read privileges; none granted."}); but if you were to look at my "Apps You're Using" page (on flickr) you'd see this. So, even though I've authorized the max permissions... flickr says I don't have any granted!? I even used flickr.auth.oauth.checkToken to double check that my access token was right, this was the value returned; jsonFlickrApi({"oauth":{"token":{"_content":"my-access-token"}, "perms":{"_content":"delete"}, "user":{"nsid":"my-user-nsid", "username":"designerseventeen", "fullname":"Designer Seventeen"}}, "stat":"ok"}) Here's how I'm attempting to call flickr.people.getPhotos... <?php // Attempt to call flickr.people.getPhotos $method = "flickr.people.getPhotos"; $format = 'json'; $nsid = 'my-user-nsid'; $sig_string = "{$api_secret}api_key{$api_key}format{$format}method{$method}user_id{$nsid}"; $api_sig = md5( $sig_string ); $flickr_call = "http://api.flickr.com/services/rest/?"; $url = "method=" . $method; $url .= "&api_key=" . $api_key; $url .= "&user_id=" . $nsid; $url .= "&format=" . $format; $url .= "&api_sig=" . $api_sig; $url = $flickr_call . $url; $results = file_get_contents( $url ); $rsp_arr = explode( '&',$results ); print "<pre>"; print_r($rsp_arr); print "</pre>"; I am officially stumped... and in need of help. Thanks!
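
    For what it's worth, the api_sig scheme above belongs to the old authentication API; an OAuth access token generally has to be sent and signed as OAuth 1.0a parameters instead, which would explain the "none granted" response despite the authorized app. A hedged, untested sketch of the signing ($access_token and $token_secret are the values from the token exchange; note http_build_query() uses RFC 1738 encoding, and a robust client should percent-encode each pair per RFC 3986):

        <?php
        $params = array(
            'method'                 => 'flickr.people.getPhotos',
            'user_id'                => $nsid,
            'format'                 => 'json',
            'oauth_consumer_key'     => $api_key,
            'oauth_token'            => $access_token,
            'oauth_nonce'            => md5(uniqid(mt_rand(), true)),
            'oauth_timestamp'        => time(),
            'oauth_signature_method' => 'HMAC-SHA1',
            'oauth_version'          => '1.0',
        );
        ksort($params);   // the base string requires sorted parameters

        $base_string = 'GET&' . rawurlencode('http://api.flickr.com/services/rest/')
                     . '&' . rawurlencode(http_build_query($params));
        $signing_key = rawurlencode($api_secret) . '&' . rawurlencode($token_secret);

        $params['oauth_signature'] =
            base64_encode(hash_hmac('sha1', $base_string, $signing_key, true));

        $results = file_get_contents(
            'http://api.flickr.com/services/rest/?' . http_build_query($params));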

    Read the article

  • Let’s Get Social

    - by Kristin Rose
    You can try to run from it like a bad Facebook picture but you can’t hide. Social media as we know it is quickly taking over our lives and is not going away any time soon. Though attempting to reach as many Twitter followers as Lady Gaga is daunting, learning how to leverage social media to meet your customer’s needs is not. For Oracle, this means interacting directly with our partners through our many social media outlets, and refraining from posting a mindless status on the pastrami on rye we ate for lunch today… though it was delicious. The “correct” way to go about social media is going to mean something different to each company. For example, sending a customer more than one friend request a day may not be the best way to get their attention, but using social media as a two-way marketing channel is. Oracle’s Partner Business Center’s (PBC) twitter handle was recently mentioned by Elateral as the “ideal way to engage with your market and use social media in the channel”. Why you ask? Because the PBC has two named social media leads manning the Twitter feed at all times, helping partners get the information and answers they need more quickly than a Justin Bieber video gone viral. So whether you want to post a video of your favorite customer attempting the Marshmallow challenge or tweet like there’s no tomorrow, be sure to follow @OraclePartnerBiz today, and see how they can help you achieve your next partner milestone with Oracle.
    Happy Socializing, The OPN Communications Team

    Read the article

  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
    I’m feeling inspired and I’d like to share a technique we’ve implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and having to create a manual mapping for each could be really tedious. They have the format http://www.littelfuse.com/series/series-name.html, for instance http://www.littelfuse.com/series/flnr.html. It would have been easier if we had our information architecture defined like this, but that would have been too easy. I took a solution that is two-fold. First, I need to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Secondly, we need to implement a handler that will take care of the actual lookup of the series. It will be amazing after we’ve gone over the details. Let’s start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts here are the Pattern and the Action groups. The Pattern is nothing but regex. Basically, I’m telling it to match the regex I have defined. In the Action group, I am telling it what to do, in this case rewrite to the redirect.aspx webform. In this implementation, I will be using Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, the URL bar will display the new URL our webform redirects to. Let me explain one small thing: the (\w+) in my Pattern group’s regex will actually translate to {R:1} in my Action group. This is where the magic begins. Now let’s see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .NET code. The only thing you will probably ask about is the LFSearch class. It’s our own implementation of finding items by using a field search: we supply the field name, the value of the field, the template name of the item we are after, and true or false depending on whether we want an exact search. If eureka, then redirect to that item’s Path (URL). If not, tell the user tough luck, here’s the 404 page as a consolation. Amazing, ain’t it?
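
    The post's screenshots of the Redirect.aspx code did not survive here; below is a hedged reconstruction of what its code-behind might look like, based purely on the description above. The LFSearch method name and signature follow the description (field name, field value, template name, exact match), and the field, template, and 404 path literals are all illustrative:

        // Hedged reconstruction; not the original Littelfuse source.
        protected void Page_Load(object sender, EventArgs e)
        {
            string series = Request.QueryString["s"];    // the {R:1} capture

            Sitecore.Data.Items.Item item =
                LFSearch.FindItemByField("Series Name", series, "Series Page", true);

            if (item != null)
                // Eureka: send the visitor to the item's URL.
                Response.Redirect(Sitecore.Links.LinkManager.GetItemUrl(item));
            else
                // Tough luck: the 404 page as a consolation.
                Server.Transfer("~/errors/404.aspx");
        }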

    Read the article

  • Javascript SDK on Facebook

    - by Eamonn Fox
    I am trying to use the JavaScript SDK for Facebook but I keep getting the message: Given URL is not permitted by the application configuration.: One or more of the given URLs is not allowed by the App's settings. It must match the Website URL or Canvas URL, or the domain must be a subdomain of one of the App's domains. But I have copied and pasted my canvas URL from the settings section. Anyone have any ideas what's up?

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools accounts and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before) but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.

    Read the article

  • robots.txt not updated

    - by Haridharan
    I have updated some URLs and files in my robots.txt file to block them from Google search results, but the files are still displayed in the search results. Following a suggestion from a site, I tried to get Google to re-read the robots.txt with these steps: in Google Webmaster Tools, go to Health - Fetch as Google, type the URL and click the Fetch button. But the files are still displayed in the search results. Note: in Google Webmaster Tools, under Health - Blocked URLs - robots.txt file, the downloaded date looks two days old.

    Read the article

  • how to include tags in permalinks of wordpress

    - by babua
    While using a custom permalink structure for my WordPress URLs, I am trying to include tags, but it shows errors; when I add a category, it is reflected in the URL. I want the tag to be included in the custom URL structure automatically. How can this be done in WordPress? Please help. When I add /%tag%/ to the custom structure field in the WordPress admin, the URL shows a "not found" message.

    Read the article

  • How to display a JSON error message?

    - by Tiny Giant Studios
    I'm currently developing a tumblr theme and have built a jQuery JSON thingamabob that uses the Tumblr API to do the following: The user would click on the "post type" link (e.g. Video Posts), at which stage jQuery would use JSON to grab all the posts that's related to that type and then dynamically display them in a designated area. Now everything works absolutely peachy, except that with Tumblr being Tumblr and their servers taking a knock every now and then, the Tumblr API thingy is sometimes offline. Now I can't foresee when this function will be down, which is why I want to display some generic error message if JSON (for whatever reason) was unable to load the post. You'll see I've already written some code to show an error message when jQuery can't find any posts related to that post type BUT it doesn't cover any server errors. Note: I sometimes get this error: Failed to load resource: the server responded with a status of 503 (Service Temporarily Unavailable) It is for this 503 Error message that I need to write some code, but I'm slightly clueless :) Here's the jQuery JSON code: $('ul.right li').find('a').click(function() { var postType = this.className; var count = 0; byCategory(postType); return false; function byCategory(postType, callback) { $.getJSON('{URL}/api/read/json?type=' + postType + '&callback=?', function(data) { var article = []; $.each(data.posts, function(i, item) { // i = index // item = data for a particular post switch(item.type) { case 'photo': article[i] = '<div class="post_wrap"><div class="photo" style="padding-bottom:5px;">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/XSTldh6ds/photo_icon.png" alt="type_icon"/></a>' + '<a href="' + item.url + '" title="{Title}"><img src="' + item['photo-url-500'] + '"alt="image" /></a></div></div>'; count = 1; break; case 'video': article[i] = '<div class="post_wrap"><div class="video" style="padding-bottom:5px;">' + '<a href="' + item.url + '" title="{Title}" class="type_icon">' + '<img src="http://static.tumblr.com/ewjv7ap/nuSldhclv/video_icon.png" alt="type_icon"/></a>' + '<span style="margin: auto;">' + item['video-player'] + '</span>' + '</div></div>'; count = 1; break; case 'audio': if (use_IE == true) { article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/R50ldh5uj/audio_icon.png" alt="type_icon"/></a>' + '<h3><a href="' + item.url + '">' + item['id3-artist'] +' - ' + item['id3-title'] + '</a></h3>' + '</div></div>'; } else { article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/R50ldh5uj/audio_icon.png" alt="type_icon"/></a>' + '<h3><a href="' + item.url + '">' + item['id3-artist'] +' - ' + item['id3-title'] + '</a></h3><div class="player">' + item['audio-player'] + '</div>' + '</div></div>'; }; count = 1; break; case 'regular': article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/dwxldhck1/regular_icon.png" alt="type_icon"/></a><h3><a href="' + item.url + '">' + item['regular-title'] + '</a></h3><div class="description_container">' + item['regular-body'] + '</div></div></div>'; count = 1; break; case 'quote': article[i] = '<div class="post_wrap"><div class="quote">' + '<a href="' + item.url + '" title="{Title}" 
class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/loEldhcpr/quote_icon.png" alt="type_icon"/></a><blockquote><h3><a href="' + item.url + '" title="{Title}">' + item['quote-text'] + '</a></h3></blockquote><cite>- ' + item['quote-source'] + '</cite></div></div>'; count = 1; break; case 'conversation': article[i] = '<div class="post_wrap"><div class="chat">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/MVuldhcth/conversation_icon.png" alt="type_icon"/></a><h3><a href="' + item.url + '">' + item['conversation-title'] + '</a></h3></div></div>'; count = 1; break; case 'link': article[i] = '<div class="post_wrap"><div class="link">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/EQGldhc30/link_icon.png" alt="type_icon"/></a><h3><a href="' + item['link-url'] + '" target="_blank">' + item['link-text'] + '</a></h3></div></div>'; count = 1; break; default: alert('No Entries Found.'); }; }) // end each if (!(count == 0)) { $('#content_right') .hide('fast') .html('<div class="first_div"><span class="left_corner"></span><span class="right_corner"></span><h2>Displaying ' + postType + ' Posts Only</h2></div>' + article.join('')) .slideDown('fast') } else { $('#content_right') .hide('fast') .html('<div class="first_div"><span class="left_corner"></span><span class="right_corner"></span><h2>Hmmm, currently there are no ' + postType + ' posts to display</h2></div>') .slideDown('fast') } // end getJSON }); // end byCategory } }); If you'd like to see the demo in action, check out Elegantem but do note that everything might work absolutely fine for you (or not), depending on Tumblr's temperament.

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction
    This will be the first of a number of posts on ASP.NET extensibility. At this moment I don't know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known; although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won't go into details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction.

    Canonical
    These are the most widely known and used providers, coming from ASP.NET 1; chances are, you have used them already. Good support for invoking client side, either from a .NET application or from JavaScript. Lots of server-side controls use them, such as the Login control for example.

    Membership
    The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses the Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, as well as parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.

        <membership defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </membership>

    Role
    The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out-of-the-box implementations: XML-based, SQL Server and Windows-based. Also registered in Web.config, through the roleManager section, where you can also say if your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider.

        <roleManager defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </roleManager>

    Profile
    The Profile provider allows defining a set of properties that will be tied to and made available to authenticated or even anonymous users, which must be tracked by using anonymous authentication. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. Configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.

        <profile defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </profile>

    Basic
    OK, I didn't know what to call these, so Basic is probably as good a name as anything else. Not supported client-side (it doesn't even make sense).

    Session
    The Session provider allows storing data tied to the current "session", which is normally created when a user first accesses the site, even when not yet authenticated, and remains all the way. The base class and only included implementation is SessionStateStoreProviderBase, and it is capable of storing data in one of three locations:

    - In the process memory (the default; not suitable for web farms or increased reliability);
    - A SQL Server database (best for reliability and clustering);
    - The ASP.NET State Service, which is a Windows Service that is installed with the .NET Framework (OK for clustering).

    The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations; think, for example, of a distributed cache.

        <sessionState customProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </sessionState>

    Resource
    A not so well known provider, it allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from some place else, such as a database. The base class is ResourceProviderFactory and there's only one internal implementation, which uses the RESX files. Configuration is through the globalization section.

        <globalization resourceProviderFactoryType="MyClass, MyAssembly" />

    Health Monitoring
    Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires "events" at specific times and when specific things happen, such as when logging in or when an exception is raised. These are not user interface events. You can create your own and fire them; nothing will happen, but the Health Monitoring provider will detect it. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out-of-the-box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure or the debugger Trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time.

        <healthMonitoring>
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </healthMonitoring>

    Sitemap
    The Sitemap provider allows defining the site's navigation structure and the associated required permissions for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site's structure, which may even be dynamic. Its configuration must be done through the siteMap section.

        <siteMap defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </siteMap>

    Web Part Personalization
    Web Parts are better known by SharePoint users, but since ASP.NET 2.0 they are included in the core Framework. Web Parts are server-side controls that offer certain possibilities of configuration by the clients visiting the page where they are located. The infrastructure handles this configuration per user or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings in SQL Server. Add new providers through the personalization section.

        <webParts>
          <personalization defaultProvider="MyProvider">
            <providers>
              <add name="MyProvider" type="MyClass, MyAssembly"/>
            </providers>
          </personalization>
        </webParts>

    Build
    The Build provider is responsible for compiling whatever files are present in your web folder. There's a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML Schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes in the buildProviders section and is per extension.

        <buildProviders>
          <add extension=".ext" type="MyClass, MyAssembly" />
        </buildProviders>

    New in ASP.NET 4
    Not exactly new, since they have existed since 2010, but in ASP.NET terms, still new.

    Output Cache
    The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only implementation is private. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use.

        <caching>
          <outputCache defaultProvider="MyProvider">
            <providers>
              <add name="MyProvider" type="MyClass, MyAssembly"/>
            </providers>
          </outputCache>
        </caching>

    Request Validation
    A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are not limited to either enabling or disabling request validation for all pages or for a specific page, but we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section.

        <httpRuntime requestValidationType="MyClass, MyAssembly" />

    Browser Capabilities
    The Browser Capabilities provider is new in ASP.NET 4, although the concept has existed since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section.

        <browserCaps provider="MyClass, MyAssembly" />

    Encoder
    The Encoder provider is responsible for encoding every string that is sent to the browser in a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder. Another implementation takes care of Anti Cross Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section.

        <httpRuntime encoderType="MyClass, MyAssembly" />

    Conclusion
    That's about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!

    Read the article
