Search Results

Search found 14313 results on 573 pages for 'private clouds'.


  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around (probably debugging).

    1. Create certificate
    1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”.
    1.2 In the drop-down list of certificates, choose <create> at the bottom.
    1.3 Follow the instructions; you can set it up with default values.
    1.4 When done, choose the certificate and click “Copy to File…” as seen in the left of the picture above.
    1.5 Save the file with any name you want. Next we will save it to local storage to be able to import it into our solution through the Azure configuration manager in step 3.

    2. Save certificate to local storage
    Now we need to attach the certificate to our local certificate storage to be able to reach it from the configuration manager in Visual Studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137

    To view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close, and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use.

    Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: certificates may not be listed; if not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist), choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key, then click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the certificate store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, they will also be added to this store. Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates, denoted by the common name of the server (found in the subject section of the certificate).

    Now that you have the certificate imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on. Right-click the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This starts the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing the contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing all the details about the certificate you are installing. Be sure this information is correct, or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. [Image of a typical MMC.EXE with the certificates up.]

    3. Import the certificate into your Visual Studio project
    3.1 Right-click your equivalent of the MvcWebRole1 (as seen in the first picture under the red oval) and choose Properties.
    3.2 Choose Certificates. Click the ellipsis to the right of the “Thumbprint” field and you should be able to select your newly created certificate there. After selecting it, save the file.

    4. Upload the certificate to your Azure subscription
    4.1 Go to the Azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.

    5. Connect to the server
    Since I tried to use account settings (you have to use another name), we have to set up a new name for the connection. No biggie.
    5.1 Go to the Azure management portal, select your service and, in the bottom menu, choose “REMOTE”. This displays the configuration for the remote connection. It will actually change your ServiceConfiguration.cscfg file, so after you change it here it might be good to choose download and replace the one in your project. Set a name that is not your Windows Azure account name and not Administrator.
    5.2 Go to Visual Studio and click Server Explorer. Choose as selected in the picture below and click “Connect using remote desktop”.
    5.3 You will now be able to log in with the name and password set up in step 5.1, and voila! Windows Server 2012, IIS and other nice stuff!

    To do this I have been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx, where you can find some of this information and more.

    Read the article

  • List of usage information to collect in a web application

    - by Thomas Levine
    I'm writing a web application that will allow people to create accounts, edit stuff, send stuff to people, &c. I plan on recording things like when things were created and sent. Is there a list of usage information that one should collect in a web application? I'd like to see whether I'm missing something. Also, is there a list of usage information that I shouldn't collect (like information that people find private)?

    Read the article

  • pingback / trackback support for a photo sharing website?

    - by cboettig
    Is there a photo sharing service, such as Flickr or Picasa, that will collect the URLs of the locations where a photo has been posted on other blogs (or mentioned in tweets, etc.)? This could be accomplished by posting each photo as a blog entry using WordPress, which would then automatically handle pingbacks, but of course a blog doesn't perform quite like a proper photo service. Perhaps this could be done with a private photo hosting server like Zenphoto by editing the PHP, but that seems rather involved. Does such a service already exist?

    Read the article

  • The "FAQ on the correct forum to post at is: " - suggestions and thanks

    At http://forums.asp.net/p/1337412/2699239.aspx#2699239 there is the "FAQ on the correct forum to post at". If you have any suggestions for new links, updates to existing links, notification that a link has gone bad, or just thanks, please post either to this thread or send me a private message.

    Read the article

  • LINQ: Enhancing Distinct With The SelectorEqualityComparer

    - by Paulo Morgado
    On my last post, I introduced the PredicateEqualityComparer and a Distinct extension method that receives a predicate to internally create a PredicateEqualityComparer to filter elements. Using the predicate greatly improves readability, conciseness and expressiveness of the queries, but it can be even better. Most of the time, we don't want to provide a comparison method but just to extract the comparison key for the elements. So, I developed a SelectorEqualityComparer that takes a method that extracts the key value for each element. Something like this:

        public class SelectorEqualityComparer<TSource, TKey> : EqualityComparer<TSource>
            where TKey : IEquatable<TKey>
        {
            private Func<TSource, TKey> selector;

            public SelectorEqualityComparer(Func<TSource, TKey> selector)
                : base()
            {
                this.selector = selector;
            }

            public override bool Equals(TSource x, TSource y)
            {
                TKey xKey = this.GetKey(x);
                TKey yKey = this.GetKey(y);

                if (xKey != null)
                {
                    return ((yKey != null) && xKey.Equals(yKey));
                }

                return (yKey == null);
            }

            public override int GetHashCode(TSource obj)
            {
                TKey key = this.GetKey(obj);
                return (key == null) ? 0 : key.GetHashCode();
            }

            public override bool Equals(object obj)
            {
                SelectorEqualityComparer<TSource, TKey> comparer = obj as SelectorEqualityComparer<TSource, TKey>;
                return (comparer != null);
            }

            public override int GetHashCode()
            {
                return base.GetType().Name.GetHashCode();
            }

            private TKey GetKey(TSource obj)
            {
                return (obj == null) ? (TKey)(object)null : this.selector(obj);
            }
        }

    Now I can write code like this:

        .Distinct(new SelectorEqualityComparer<Source, Key>(x => x.Field))

    And, for improved readability, conciseness and expressiveness, and support for anonymous types, the corresponding Distinct extension method:

        public static IEnumerable<TSource> Distinct<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> selector)
            where TKey : IEquatable<TKey>
        {
            return source.Distinct(new SelectorEqualityComparer<TSource, TKey>(selector));
        }

    And the query is now written like this:

        .Distinct(x => x.Field)

    For most usages, it's simpler than using a predicate.
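    To make the usage concrete, here is a minimal, hypothetical example of calling the Distinct(selector) extension method above. The Customer class and the sample data are my own illustration, not from the post; it assumes the extension method and SelectorEqualityComparer shown above are in scope.

        // Hypothetical usage example; Customer and the sample data are assumptions.
        // Assumes the Distinct(selector) extension method above is in scope.
        using System;
        using System.Collections.Generic;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class Program
        {
            public static void Main()
            {
                var customers = new List<Customer>
                {
                    new Customer { Id = 1, Name = "Alice" },
                    new Customer { Id = 2, Name = "Bob" },
                    new Customer { Id = 1, Name = "Alice (duplicate)" }
                };

                // int implements IEquatable<int>, so it satisfies the TKey constraint.
                foreach (var customer in customers.Distinct(c => c.Id))
                {
                    Console.WriteLine("{0}: {1}", customer.Id, customer.Name);
                }
                // Prints the Id 1 entry once and the Id 2 entry once.
            }
        }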

    Read the article

  • The Dispose Pattern (and FxCop warnings)

    - by Scott Dorman
    [This is actually a response to Bill’s blog post, but since it isn’t possible to leave this as a comment on his blog it’s a post here.] There are many different ways to implement the Dispose pattern correctly. Some are (in my opinion) better than others. In Bill’s blog post he presents a particular pattern, which is an excerpt from his book (Effective C#). The issue centers around the fact that a reader took the code sample presented in the book and ran FxCop (Code Analysis) on it, which generated a warning: “Ensure that base.Dispose() is always called.” The “lesson learned” that Bill presents is that “tools are there to help us, not control us.” While I completely agree with the belief that tools are there to help us, I think it’s important to understand why FxCop is raising this particular warning. The code presented in Bill’s book looks like:

        // Have its own disposed flag.
        private bool disposed = false;

        protected override void Dispose(bool isDisposing)
        {
            // Don't dispose more than once.
            if (disposed)
                return;

            if (isDisposing)
            {
                // TODO: free managed resources here.
            }

            // TODO: free unmanaged resources here.

            // Let the base class free its resources.
            // Base class is responsible for calling
            // GC.SuppressFinalize( )
            base.Dispose(isDisposing);

            // Set derived class disposed flag:
            disposed = true;
        }

    This code does follow all of the guidelines for implementing the Dispose pattern. In this case, it’s presumably part of a larger example showing how to implement the pattern as part of a base class. The reason FxCop is warning you about this code is the first if statement in the Dispose method, which will cause the method to exit if disposed is true. The problem here is that there is the possibility that if the disposed flag is true, the call to base.Dispose() will never be executed. As Bill points out, it is possible for some other code elsewhere in the class to set this flag. He states that this is an “unlikely occurrence.” While that is probably true, it can be a potentially dangerous assumption to make and is one that can be easily corrected. By changing the code slightly you can remove this assumption and correct the FxCop violation.

        private bool disposed = false;

        protected override void Dispose(bool disposing)
        {
            if (!disposed)
            {
                if (disposing)
                {
                    // Dispose managed resources.
                }

                // Dispose unmanaged resources.
                disposed = true;
            }

            base.Dispose(disposing);
        }

    Using this implementation allows the call to base.Dispose() to always occur, which ensures that the disposal chain is always properly followed. Technorati Tags: .NET, C#, Dispose Pattern
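    For context, here is a minimal sketch of the base class side of the pattern that derived code like the above would typically sit on top of. This is my own illustration of the standard Dispose pattern, not code from Scott's or Bill's post: the base class owns the public Dispose() entry point, the finalizer, and the call to GC.SuppressFinalize().

        // A minimal sketch of a disposable base class, assuming the standard
        // Dispose pattern; this is an illustration, not code from the post.
        using System;

        public class ResourceHolderBase : IDisposable
        {
            private bool disposed = false;

            public void Dispose()
            {
                Dispose(true);
                // The base class is responsible for suppressing finalization,
                // as the comment in the book's sample notes.
                GC.SuppressFinalize(this);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (!disposed)
                {
                    if (disposing)
                    {
                        // Dispose managed resources owned by the base class.
                    }

                    // Free unmanaged resources owned by the base class.
                    disposed = true;
                }
            }

            ~ResourceHolderBase()
            {
                // Finalizer path: only unmanaged resources may be touched here.
                Dispose(false);
            }
        }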

    Read the article

  • Should adapters or wrappers be unit tested?

    - by m3th0dman
    Suppose that I have a class that implements some logic:

        public class MyLogicImpl implements MyLogic {
            public void myLogicMethod() {
                // my logic here
            }
        }

    and somewhere else a test class:

        public class MyLogicImplTest {
            @Test
            public void testMyLogicMethod() {
                // test my logic
            }
        }

    I also have:

        @WebService
        public class MyWebServices {
            @Inject
            private MyLogic myLogic;

            @WebMethod
            public void myLogicWebMethod() {
                myLogic.myLogicMethod();
            }
        }

    Should there be a unit test for myLogicWebMethod, or should the testing for it be handled in integration testing?

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you’ll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let’s say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I’ll use an MVC application:

     1: using System;
     2: using System.Net.Http;
     3: using System.Web.Mvc;
     4: using FakeChannelExample.Models;
     5: using Microsoft.Runtime.Serialization;
     6:
     7: namespace FakeChannelExample.Controllers
     8: {
     9:     public class HomeController : Controller
    10:     {
    11:         private readonly HttpClient httpClient;
    12:
    13:         public HomeController(HttpClient httpClient)
    14:         {
    15:             this.httpClient = httpClient;
    16:         }
    17:
    18:         public ActionResult Index()
    19:         {
    20:             var response = httpClient.Get("Person(1)");
    21:             var person = response.Content.ReadAsDataContract<Person>();
    22:
    23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
    24:
    25:             return View();
    26:         }
    27:     }
    28: }

    On line #20 of the code above you can see I’m performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to serialize to a Person object. In this example, the HttpClient is being passed into the constructor by MVC’s dependency resolver – in this case, I’m using StructureMap as an IoC and my StructureMap initialization code looks like this:

     1: using StructureMap;
     2: using System.Net.Http;
     3:
     4: namespace FakeChannelExample
     5: {
     6:     public static class IoC
     7:     {
     8:         public static IContainer Initialize()
     9:         {
    10:             ObjectFactory.Initialize(x =>
    11:             {
    12:                 x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
    13:             });
    14:             return ObjectFactory.Container;
    15:         }
    16:     }
    17: }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in this interface, and use that object inside my controller instead – however, there are a few reasons why that is not desirable. For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don’t really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don’t happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios. If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you’re probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just “following the links” that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it’s unnecessary. Even though the code above is dependent on a concrete type, it’s actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

     1: [TestClass]
     2: public class HomeControllerTest
     3: {
     4:     [TestMethod]
     5:     public void Index()
     6:     {
     7:         // Arrange
     8:         var httpClient = new HttpClient("http://foo.com");
     9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
    10:
    11:         HomeController controller = new HomeController(httpClient);
    12:
    13:         // Act
    14:         ViewResult result = controller.Index() as ViewResult;
    15:
    16:         // Assert
    17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
    18:     }
    19: }

    Notice on line #9, I’m setting the Channel property of the HttpClient to be a fake channel. I’m also specifying the fake object that I want to be in the response on my “fake” HTTP request. I don’t need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

     1: using System;
     2: using System.IO;
     3: using System.Net.Http;
     4: using System.Runtime.Serialization;
     5: using System.Threading;
     6: using FakeChannelExample.Models;
     7:
     8: namespace FakeChannelExample.Tests
     9: {
    10:     public class FakeHttpChannel<T> : HttpClientChannel
    11:     {
    12:         private T responseObject;
    13:
    14:         public FakeHttpChannel(T responseObject)
    15:         {
    16:             this.responseObject = responseObject;
    17:         }
    18:
    19:         protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
    20:         {
    21:             return new HttpResponseMessage()
    22:             {
    23:                 RequestMessage = request,
    24:                 Content = new StreamContent(this.GetContentStream())
    25:             };
    26:         }
    27:
    28:         private Stream GetContentStream()
    29:         {
    30:             var serializer = new DataContractSerializer(typeof(T));
    31:             Stream stream = new MemoryStream();
    32:             serializer.WriteObject(stream, this.responseObject);
    33:             stream.Position = 0;
    34:             return stream;
    35:         }
    36:     }
    37: }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I’m using the DataContractSerializer to serialize the object and write it to a stream. That’s all you need to do. In the example above, the only thing I’ve chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer other than the DataContractSerializer. You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.
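    As a side note on the "add HTTP headers" idea mentioned above, here is a minimal sketch of what such an extension could look like. It assumes the same Preview 3 types used in the post; the headers dictionary and the X-Fake-Channel example header are my own illustration, not the author's code.

        // A minimal sketch, assuming the WCF Web APIs Preview 3 types used above.
        // The headers dictionary and the X-Fake-Channel example are illustrative only.
        using System.Collections.Generic;
        using System.IO;
        using System.Net.Http;
        using System.Runtime.Serialization;
        using System.Threading;

        public class FakeHttpChannelWithHeaders<T> : HttpClientChannel
        {
            private readonly T responseObject;
            private readonly IDictionary<string, string> headers;

            public FakeHttpChannelWithHeaders(T responseObject, IDictionary<string, string> headers)
            {
                this.responseObject = responseObject;
                this.headers = headers ?? new Dictionary<string, string>();
            }

            protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var response = new HttpResponseMessage()
                {
                    RequestMessage = request,
                    Content = new StreamContent(this.GetContentStream())
                };

                // Stamp each configured header onto the fake response so tests can assert on it,
                // e.g. { "X-Fake-Channel", "true" }.
                foreach (var header in this.headers)
                {
                    response.Headers.Add(header.Key, header.Value);
                }

                return response;
            }

            private Stream GetContentStream()
            {
                var serializer = new DataContractSerializer(typeof(T));
                Stream stream = new MemoryStream();
                serializer.WriteObject(stream, this.responseObject);
                stream.Position = 0;
                return stream;
            }
        }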

    Read the article

  • Maven Integrated View for NetBeans IDE

    - by Geertjan
    Started working on an oft-heard request from Kirk Pepperdine for an integrated view for multimodule builds for Maven projects in NetBeans IDE, as explained here. I suddenly had some kind of brainwave and solved all the remaining problems I had, by delegating to the LogicalViewProvider's node, instead of the project's node, which means I inherit all the icons, actions, package nodes, and anything else that was originally defined within the original project, in this case for the open source JAnnocessor project: Above, you can see that the Maven submodules can either be edited in-line, i.e., within the parent project, or separately, by opening them in the traditional NetBeans way. Get the module here: http://plugins.netbeans.org/plugin/45180/?show=true Some people out there might be interested in how this is achieved. First, hide the original ModulesNodeFactory in the layer. Then create the following class, which creates what you see in the screenshot above:

        import java.util.ArrayList;
        import java.util.List;
        import javax.swing.event.ChangeListener;
        import org.netbeans.api.project.Project;
        import org.netbeans.spi.project.SubprojectProvider;
        import org.netbeans.spi.project.ui.LogicalViewProvider;
        import org.netbeans.spi.project.ui.support.NodeFactory;
        import org.netbeans.spi.project.ui.support.NodeList;
        import org.openide.nodes.FilterNode;
        import org.openide.nodes.Node;

        @NodeFactory.Registration(projectType = "org-netbeans-modules-maven", position = 400)
        public class ModulesNodeFactory2 implements NodeFactory {

            @Override
            public NodeList<?> createNodes(Project prjct) {
                return new MavenModulesNodeList(prjct);
            }

            private class MavenModulesNodeList implements NodeList<Project> {

                private final Project project;

                public MavenModulesNodeList(Project prjct) {
                    this.project = prjct;
                }

                @Override
                public List<Project> keys() {
                    return new ArrayList<Project>(
                            project.getLookup().
                            lookup(SubprojectProvider.class).getSubprojects());
                }

                @Override
                public Node node(final Project project) {
                    Node node = project.getLookup().lookup(LogicalViewProvider.class).createLogicalView();
                    return new FilterNode(node, new FilterNode.Children(node));
                }

                @Override
                public void addChangeListener(ChangeListener cl) {
                }

                @Override
                public void removeChangeListener(ChangeListener cl) {
                }

                @Override
                public void addNotify() {
                }

                @Override
                public void removeNotify() {
                }
            }
        }

    Considering that there are only about 5 actual statements above, it's pretty amazing how much can be achieved with so little code. The NetBeans APIs really are very cool. Hope you like it, Kirk!

    Read the article

  • MVC Portable Area Modules *Without* MasterPages

    - by Steve Michelotti
    Portable Areas from MvcContrib provide a great way to build modular and composite applications on top of MVC. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. I’ve blogged about Portable Areas in the past, including this post which talks about embedding resources, and you can read more of an intro to Portable Areas here. As great as Portable Areas are, the question that seems to come up the most is: what about MasterPages? MasterPages seem to be the one thing that doesn’t work elegantly with portable areas, because you specify the MasterPage in the @Page directive and it won’t use the same mechanism as the view engine, so you can’t just embed them as resources. This means that you end up referencing a MasterPage that exists in the host application but not in your portable area. If you name the ContentPlaceHolderIds correctly, it will work – but it all seems a little fragile. Ultimately, what I want is to be able to build a portable area as a module which has no knowledge of the host application. I want to be able to invoke the module by a full route in the user’s browser and have it get invoked and “automatically appear” inside the application’s visual chrome, just like a MasterPage. So how could we accomplish this with portable areas? With this question in mind, I looked around at what other people are doing to address similar problems. Specifically, I immediately looked at how the Orchard team is handling this and I found it very compelling. Basically Orchard has its own custom layout/theme framework (utilizing a custom view engine) that allows you to build your module without any regard to the host. You simply decorate your controller with the [Themed] attribute and it will render with the outer chrome around it:

    1: [Themed]
    2: public class HomeController : Controller

    Here is the slide from the Orchard talk at this year's MIX conference which shows how it conceptually works. It’s pretty cool stuff. So I figure it must not be too difficult to incorporate this into the portable areas view engine as an optional piece of functionality. In fact, I’ll even simplify it a little – rather than have 1) Document.aspx, 2) Layout.ascx, and 3) <view>.ascx (as shown in the picture above), I’ll just have the outer page be “Chrome.aspx” and then the specific view in question. The Chrome.aspx not only takes the place of the MasterPage, but now, since we’re no longer constrained by the MasterPage infrastructure, we have the choice of the Chrome.aspx living in the host or inside the portable area as another embedded resource! Disclaimer: credit where credit is due – much of the code from this post is me re-purposing the Orchard code to suit my needs. To avoid confusion with Orchard, I’m going to refer to my implementation (which is based on theirs) as a Chrome rather than a Theme.
    The first step I’ll take is to create a ChromedAttribute which adds a flag to the current HttpContext to indicate that the controller is designated Chromed, like this:

    1: [Chromed]
    2: public class HomeController : Controller

    The attribute itself is an MVC ActionFilter attribute:

     1: public class ChromedAttribute : ActionFilterAttribute
     2: {
     3:     public override void OnActionExecuting(ActionExecutingContext filterContext)
     4:     {
     5:         var chromedAttribute = GetChromedAttribute(filterContext.ActionDescriptor);
     6:         if (chromedAttribute != null)
     7:         {
     8:             filterContext.HttpContext.Items[typeof(ChromedAttribute)] = null;
     9:         }
    10:     }
    11:
    12:     public static bool IsApplied(RequestContext context)
    13:     {
    14:         return context.HttpContext.Items.Contains(typeof(ChromedAttribute));
    15:     }
    16:
    17:     private static ChromedAttribute GetChromedAttribute(ActionDescriptor descriptor)
    18:     {
    19:         return descriptor.GetCustomAttributes(typeof(ChromedAttribute), true)
    20:             .Concat(descriptor.ControllerDescriptor.GetCustomAttributes(typeof(ChromedAttribute), true))
    21:             .OfType<ChromedAttribute>()
    22:             .FirstOrDefault();
    23:     }
    24: }

    With that in place, we only have to override the FindView() method of the custom view engine with these 6 lines of code:

     1: public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
     2: {
     3:     if (ChromedAttribute.IsApplied(controllerContext.RequestContext))
     4:     {
     5:         var bodyView = ViewEngines.Engines.FindPartialView(controllerContext, viewName);
     6:         var documentView = ViewEngines.Engines.FindPartialView(controllerContext, "Chrome");
     7:         var chromeView = new ChromeView(bodyView, documentView);
     8:         return new ViewEngineResult(chromeView, this);
     9:     }
    10:
    11:     // Just execute normally without applying Chromed View Engine
    12:     return base.FindView(controllerContext, viewName, masterName, useCache);
    13: }

    If the view engine finds the [Chromed] attribute, it will invoke its own process – otherwise, it’ll just defer to the normal web forms view engine (with masterpages). The ChromeView’s primary job is to independently set the BodyContent on the view context so that it can be rendered at the appropriate place:

     1: public class ChromeView : IView
     2: {
     3:     private ViewEngineResult bodyView;
     4:     private ViewEngineResult documentView;
     5:
     6:     public ChromeView(ViewEngineResult bodyView, ViewEngineResult documentView)
     7:     {
     8:         this.bodyView = bodyView;
     9:         this.documentView = documentView;
    10:     }
    11:
    12:     public void Render(ViewContext viewContext, System.IO.TextWriter writer)
    13:     {
    14:         ChromeViewContext chromeViewContext = ChromeViewContext.From(viewContext);
    15:
    16:         // First render the Body view to the BodyContent
    17:         using (var bodyViewWriter = new StringWriter())
    18:         {
    19:             var bodyViewContext = new ViewContext(viewContext, bodyView.View, viewContext.ViewData, viewContext.TempData, bodyViewWriter);
    20:             this.bodyView.View.Render(bodyViewContext, bodyViewWriter);
    21:             chromeViewContext.BodyContent = bodyViewWriter.ToString();
    22:         }
    23:         // Now render the Document view
    24:         this.documentView.View.Render(viewContext, writer);
    25:     }
    26: }

    The ChromeViewContext (code excluded here) mainly just has a string property for the “BodyContent” – but it also makes sure to put itself in the HttpContext so it’s available.
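    Since the ChromeViewContext code is excluded from the post, here is a minimal sketch of what such a class might look like, based purely on the description above. This is a reconstruction under stated assumptions, not the author's actual code.

        // Hypothetical reconstruction: a string property for the BodyContent,
        // plus a From() helper that stores/retrieves the instance in HttpContext.Items.
        public class ChromeViewContext
        {
            public string BodyContent { get; set; }

            public static ChromeViewContext From(ViewContext viewContext)
            {
                var items = viewContext.HttpContext.Items;
                var chromeViewContext = items[typeof(ChromeViewContext)] as ChromeViewContext;
                if (chromeViewContext == null)
                {
                    chromeViewContext = new ChromeViewContext();
                    items[typeof(ChromeViewContext)] = chromeViewContext;
                }
                return chromeViewContext;
            }
        }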
    Finally, we created a little extension method so the module’s view can be rendered in the appropriate place:

    1: public static void RenderBody(this HtmlHelper htmlHelper)
    2: {
    3:     ChromeViewContext chromeViewContext = ChromeViewContext.From(htmlHelper.ViewContext);
    4:     htmlHelper.ViewContext.Writer.Write(chromeViewContext.BodyContent);
    5: }

    At this point, the other thing left is to decide how we want to implement the Chrome.aspx page. One approach is to copy/paste the HTML from the typical Site.Master and change the main content placeholder to use the HTML helper above – this way, there are no MasterPages anywhere. Alternatively, we could even have Chrome.aspx utilize the MasterPage if we wanted (e.g., in the case where some pages are Chromed and some pages want to use a traditional MasterPage):

    1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>
    2: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    3:     <% Html.RenderBody(); %>
    4: </asp:Content>

    At this point, it’s all academic. I can create a controller like this:

    1: [Chromed]
    2: public class WidgetController : Controller
    3: {
    4:     public ActionResult Index()
    5:     {
    6:         return View();
    7:     }
    8: }

    Then I’ll just create Index.ascx (a partial view) and put in the text “Inside my widget”. Now when I run the app, I can request the full route (notice the controller name of “widget” in the address bar below) and the HTML from my Index.ascx will just appear where it is supposed to. This means no more warnings for missing MasterPages and no more need for your module to have knowledge of the host’s MasterPage placeholders. You have the option of using the Chrome.aspx in the host or providing your own while embedding it as an embedded resource itself. I’m curious to know what people think of this approach. The code above was done with my own local copy of MvcContrib, so it’s not currently something you can download. At this point, these are just my initial thoughts – just incorporating some ideas from Orchard into non-Orchard apps to enable building modular/composite apps more easily. Additionally, on the flip side, I still believe that Portable Areas have potential as the module packaging story for Orchard itself. What do you think?

    Read the article

  • Moving Character in C# XNA Not working

    - by Matthew Stenquist
    I'm having trouble trying to get my character to move for a game I'm making in my spare time for the Xbox. However, I can't seem to figure out what I'm doing wrong, and I'm not even sure if I'm doing it right. I've tried googling tutorials on this but I haven't found any helpful ones, mainly ones on 3D rotation on the XNA Creators Club website. My question is: how can I get the character to walk towards the right in the MoveInput() function? What am I doing wrong? Did I code it wrong? The problem is: the player isn't moving. I think the MoveInput() method isn't working. Here's the code from my Character class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Input;

        namespace Jumping
        {
            class Character
            {
                Texture2D texture;
                Vector2 position;
                Vector2 velocity;
                int velocityXspeed = 2;
                bool jumping;

                public Character(Texture2D newTexture, Vector2 newPosition)
                {
                    texture = newTexture;
                    position = newPosition;
                    jumping = true;
                }

                public void Update(GameTime gameTime)
                {
                    JumpInput();
                    MoveInput();
                }

                private void MoveInput()
                {
                    // Move character right
                    GamePadState gamePad1 = GamePad.GetState(PlayerIndex.One);
                    velocity.X = velocity.X + (velocityXspeed * gamePad1.ThumbSticks.Right.X);
                }

                private void JumpInput()
                {
                    position += velocity;
                    if (GamePad.GetState(PlayerIndex.One).Buttons.A == ButtonState.Pressed && jumping == false)
                    {
                        position.Y -= 1f;
                        velocity.Y = -5f;
                        jumping = true;
                    }
                    if (jumping == true)
                    {
                        float i = 1.6f;
                        velocity.Y += 0.15f * i;
                    }
                    if (position.Y + texture.Height >= 1000)
                        jumping = false;
                    if (jumping == false)
                        velocity.Y = 0f;
                }

                public void Draw(SpriteBatch spriteBatch)
                {
                    spriteBatch.Draw(texture, position, Color.White);
                }
            }
        }

    Read the article

  • Creating an ASP.NET Dynamic Web Page Using MS SQL Server 2008 Database (GridView Display)

    Dynamic pages, which pull, insert, update, and delete data or content from a database, are extremely useful in modern websites. They provide a high level of user interactivity that improves the user experience. This article will show you how to create such pages in ASP.NET using a Microsoft SQL Server 2008 database...
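    As a rough illustration of the kind of page the article describes, here is a minimal code-behind sketch that binds a GridView to a SQL Server 2008 table. The "MyDb" connection string name, the Products table, and the ProductsGrid control are assumptions for illustration, not details from the article.

        // Minimal sketch, not the article's code: bind a GridView to a SQL Server table.
        // The "MyDb" connection string, Products table, and ProductsGrid control are hypothetical.
        using System;
        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web.UI;

        public partial class ProductsPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Bind only on the initial request; postbacks keep the grid's view state.
                if (!IsPostBack)
                {
                    BindGrid();
                }
            }

            private void BindGrid()
            {
                string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
                using (var conn = new SqlConnection(connStr))
                using (var adapter = new SqlDataAdapter("SELECT Id, Name, Price FROM Products", conn))
                {
                    var table = new DataTable();
                    adapter.Fill(table);

                    // ProductsGrid is a GridView declared in the .aspx markup (designer partial).
                    ProductsGrid.DataSource = table;
                    ProductsGrid.DataBind();
                }
            }
        }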

    Read the article

  • StrongVPN on Ubuntu: Simple VPN Solution That Works

    Linux Pro Magazine: "Ask any knowledgeable mobile user, and she will tell you that the best way to securely access the Internet in public places is through a VPN (virtual private network) connection. So if you enjoy sipping coffee at a local cafe while checking email and browsing the Web, a secure VPN connection is a good solution..."

    Read the article

  • Recipient address rejected: User unknown in local recipient table;

    - by Thufir
    I've gone through the guide for mailman with some difficulty, but seem to be nearly there. I'm able to navigate to the mailman web GUI, create lists and subscribe. I just subscribe my local FQDN, so [email protected] for testing purposes. This FQDN only works on localhost. However, e-mails to the list address, in this case [email protected], are rejected:

        root@dur:~# tail /var/log/mail.log
        Aug 28 08:28:43 dur postfix/master[12208]: terminating on signal 15
        Aug 28 08:28:44 dur postfix/postfix-script[12322]: starting the Postfix mail system
        Aug 28 08:28:44 dur postfix/master[12323]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 08:28:46 dur postfix/postfix-script[12332]: stopping the Postfix mail system
        Aug 28 08:28:46 dur postfix/master[12323]: terminating on signal 15
        Aug 28 08:28:47 dur postfix/postfix-script[12437]: starting the Postfix mail system
        Aug 28 08:28:47 dur postfix/master[12438]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 08:29:29 dur postfix/smtpd[12460]: connect from localhost[127.0.0.1]
        Aug 28 08:29:30 dur postfix/smtpd[12460]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur.bounceme.net>
        Aug 28 08:29:33 dur postfix/smtpd[12460]: disconnect from localhost[127.0.0.1]

        root@dur:~# ll /var/lib/mailman/data/
        total 56
        drwxrwsr-x 2 root list  4096 Aug 28 08:28 ./
        drwxrwsr-x 8 root list  4096 Aug 27 19:58 ../
        -rw-r--r-- 1 root list     0 Aug 28 04:36 aliases
        -rw-r--r-- 1 root list 12288 Aug 28 04:36 aliases.db
        -rw-r--r-- 1 root list 12288 Aug 28 08:28 aliases.db.db
        -rw-r----- 1 root list    41 Aug 27 21:04 creator.pw
        -rw-rw-r-- 1 root list    10 Aug 27 19:58 last_mailman_version
        -rw-r--r-- 1 root list 14100 Oct 19  2011 sitelist.cfg

        root@dur:~# grep alias /etc/postfix/main.cf
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        alias_database = hash:/var/lib/mailman/data/aliases.db
        #alias_database = hash:/etc/aliases

        root@dur:~# postconf -n
        alias_database = hash:/var/lib/mailman/data/aliases.db
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = $myhostname localhost.$mydomain localhost $mydomain
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.example.com
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

    Why is this e-mail rejected? It seems like it may be related to the alias_maps and alias_database settings in Postfix.

    Read the article

  • What is the diffference between "data hiding" and "encapsulation"?

    - by john smith optional
    I'm reading "Java Concurrency in Practice" and it says: "Fortunately, the same object-oriented techniques that help you write well-organized, maintainable classes - such as encapsulation and data hiding - can also help you create thread-safe classes." Problem #1: I never heard about data hiding and don't know what it is. Problem #2: I always thought that encapsulation was using private vs. public, and was actually data hiding. Can you please explain what data hiding is and how it differs from encapsulation?

    Read the article

  • White screen on webcam site using Flash Player

    - by glenn
    I browsed a webcam site with Flash Player 10 and it works perfectly. But there is a private chat section, and when I enter it I see a big white rectangle where the webcam and chat box should be. What is the problem? It is definitely the Flash Player, as the page does not prompt me to download the latest Flash Player... The site was working fine until last week. Does anybody know how to re-install Flash Player?

    Read the article

  • CAPTCHA blocking for my scraping script?

    - by Surabhil Sergy
    I am working on a scraping project which involves getting web data and parsing it for further use. I have been working with PHP and cURL to make scraping scripts which crawl web data, and I make use of either PHP DOM or the Simple HTML DOM Parser library for these kinds of projects. On a recent project I encountered some challenges; initially I found the target website blocked my server IP, such that the server could not make any successful requests to the site. Understanding these issues to be common, I bought a set of private proxies and tried to make request calls using them. Though this got successful responses, I noticed the script was getting some kind of block after 2-3 continuous requests. On printing and checking the response I could see a pop-up asking for CAPTCHA validation. I could not see any captcha characters to be entered, and it also showed an error "input error: invalid referrer". On examining the source I could see some Google reCAPTCHA scripts within. I'm stuck at this point and not able to execute my script. My script is used for gathering data and it needs to go through a large number of pages periodically over the site, but in the current scenario I am not able to proceed. I can see there are some options to overcome these captcha issues, and scraping these kinds of sites is common. I have been checking my script's performance and responses over the last two months. During the first month I was able to execute a very large number of requests from a single IP and get results. Later I got an IP block and used private proxies, which got me some results. Now I am facing the captcha trouble. I would appreciate any help or suggestions in this regard. (Often with this kind of question the first comment is, 'Have you asked for prior permission from the target?' I haven't, but I know there are many sites doing this to get details out of other sites, and target sites may not often grant access. I respect the legality and scraping etiquette, but I would like to know at what point I am stuck and how I can overcome it!) I can provide any supporting information if needed.

    Read the article

  • ArchBeat Link-o-Rama for November 8, 2012

    - by Bob Rhubart
    Webcast: Meeting Customer Expectations in the New Age of Retail
    Keep your eye on this live webcast as Sanjeev Sharma (Principal Product Director, Oracle Exalogic), Kelly Goetsch (Senior Principal Product Manager, Oracle Commerce), and Dan Conway (Senior Product Manager, Oracle Retail) offer real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems. Live, Thursday Nov 8, 10am PT / 1pm ET.

    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
    "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger

    SOA Still Not Dead: Ratification of Governance Standard Highlights SOA's Continued Relevance
    So just about the time I dig into Google Trends to learn that the conversation about governance peaked in 2004, along comes this InfoQ article by Richard Seroter. And of course you've already listened to the OTN ArchBeat Podcast about governance, right? Right?

    Implications of Java 6 End of Public Updates for Oracle E-Business Suite Users | Steven Chan
    The short version is: "Nothing will change for EBS users after February 2013." According to Steven Chan, "EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6." You'll find additional information on Steven's blog.

    ADF Mobile Custom Javascript – iFrame Injection | John Brunswick
    The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.

    How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers
    "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.

    Thought for the Day
    "(When) asking skilled architects…what they do when confronted with highly complex problems…(they) would most likely answer, 'Just use Common Sense.' (A) better expression than 'common sense' is 'contextual sense' — a knowledge of what is reasonable within a given context. Practicing architects through education, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006)
    Source: SoftwareQuotes.com

    Read the article

  • How to exclude copy local referenced assemblies from a VSIX

    - by Daniel Cazzulino
    When you add library references to a project that are not reference assemblies or installed in the GAC, Visual Studio defaults to setting Copy Local to True. If, however, those dependencies are distributed by some other means (i.e. another extension, or as part of VS private assemblies, or whatever) and you want to avoid including them in your VSIX, you can add the following property to the project file:

        <PropertyGroup>
          ...
          <IncludeCopyLocalReferencesInVSIXContainer>false</IncludeCopyLocalReferencesInVSIXContainer>
        </PropertyGroup>

    Read full article

    Read the article

  • Oracle's Cloud Strategy after OOW 2012

    - by Manuel Hossfeld
    At this year's Oracle Open World, "the cloud" was not just a heavily used buzzword but also the occasion for some interesting announcements. If you had neither the time nor the inclination to listen to the corresponding keynotes by Larry Ellison and Thomas Kurian, this article summarizes the essential changes. The first piece of news: Oracle will in future offer all three "flavors" or "layers" of cloud computing. SaaS (Software as a Service), the provisioning of complete business applications (e.g. from the eBusiness Suite) on a subscription model, has been available for some time. Apart from the fact that additional and newer components and modules from Oracle's palette (made even broader by the recent acquisitions) are now offered, the principle is unchanged. For PaaS (Platform as a Service), the two services already announced last year deserve mention: the "Database Service" (based on APEX) and the "Java Service" (based on WebLogic), for which concrete packages and prices (roughly $175 to $2,000 per month) and the ability to sign up at http://cloud.oracle.com are now available. Interestingly, a so-called "Social Service" also belongs to this layer; with it, Oracle customers will in future be able to extend their applications in a standardized way with social networking functionality such as microblogging. Also newly announced was a "Developer Service", which is to provide source code management through Git repositories as well as wikis and issue tracking. Applications built there with JDeveloper, NetBeans or Eclipse can then be deployed seamlessly into the Java Service within a very short time. Completely new, and for some surely surprising, is the IaaS (Infrastructure as a Service) area. This is about providing basic infrastructure components such as storage, compute power (ultimately operating systems and VMs) and messaging/queueing. Exact details and prices for the IaaS offerings are not yet known, but for the storage and messaging services at least, basic information can already be viewed at http://cloud.oracle.com. The second piece of news: as an alternative to running in the "Oracle Cloud" described above, customers will in future be able to have that cloud built entirely behind their own firewall. In other words, in this offering, called the "Oracle Private Cloud", Oracle builds and operates all components itself, but the data never leaves the customer's premises. The latter is an important aspect, especially in privacy-sensitive Germany. Since the components used are the same in both cases, "moving" or extending the private cloud into the public cloud (or back) is possible without changes to the applications. The possibility of a "hybrid cloud", in which some parts of an application run behind the customer's own firewall and other parts run in the Oracle Cloud, thus becomes reality.

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by [email protected]
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, there are simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Week. The article, given FCW's readership and the topic, is obviously focused on the public sector and US federal policies. However, it touches on some broader issues that impact the private sector as well, which are applicable to any government and country/region, such as:
    · How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sector collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?
    · What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."
    That Got Me Thinking
    · How would it impact Oracle's customers? I know they have business continuity plans; is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...
    · How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.
    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • Recommended books on C++

    - by Mr Teeth
    Hi, I'm looking for a book that comes with a CD-ROM containing an IDE that readers can install and use as an environment to learn C++ in, like the "Objects First With Java - A Practical Introduction Using BlueJ" book, where Java is learnt with BlueJ. Is there a book like this for teaching C++? If there isn't, I'll still appreciate a recommended book for a novice to learn C++ from. I know nothing about C++ and I want to learn in my private time.

    Read the article

  • Creating a Document Library with Content Type in code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspx

    In the past, I have shown how to create a list content type and add the content type to a list in code. As developers, many of the artifacts which we create are widgets which have a list or document library as the back end. We need to be able to create our applications (web part, etc.) without having the user involved except to enter the list item data. Today, I will show you how to do the same with a document library. A summary of what we will do is as follows:

    1. Create an Empty SharePoint Project in Visual Studio
    2. Add a Code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution
    3. Create a new Feature and add an event receiver; all the code will be in the event receiver
    4. Add the fields which will extend the built-in Document content type
    5. If the content type does not exist, create it
    6. If the document library does not exist, create it with the new content type inherited from the Document content type
    7. Delete the Document content type from the library (as we have a new one which inherited from it)
    8. Add the fields which we want to be visible from the fields added to the new content type

    Here we go:

    Create an Empty SharePoint Project in Visual Studio.

    Add a Code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution. The Utilities and Extensions library will be part of this project, for which I will provide a download link at the end of this post. Drag and drop them into your project. If dragged and dropped from Windows Explorer, you will need to show all files and then include them in your project. Change the namespace to agree with your project.

    Create a new Feature and add an event receiver; all the code will be in the event receiver. Here we added a new Feature called "CreateDocLib" and then right-clicked to add an event receiver. All of our code will be in this event receiver. For this demo I will only be using the Feature Activated event.

    From this point on we will be looking at code! We are adding two constants: columnGroup (how we want SharePoint to group the columns, usually the company name) and ctName (the content type name):

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Permissions;
        using Microsoft.SharePoint;

        namespace CreateDocLib.Features.CreateDocLib
        {
            /// <summary>
            /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
            /// </summary>
            /// <remarks>
            /// The GUID attached to this class may be used during packaging and should not be modified.
            /// </remarks>
            [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")]
            public class CreateDocLibEventReceiver : SPFeatureReceiver
            {
                const string columnGroup = "DJ";
                const string ctName = "DJDocLib";
            }
        }

    Here we are creating the Feature Activated event: adding the new fields (site columns); testing if the content type exists and, if not, adding it; and testing if the document library exists and, if not, adding it.

        #region DocLib
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            using (SPWeb spWeb = properties.GetWeb() as SPWeb)
            {
                // add the fields
                addFields(spWeb);

                // add content type
                SPContentType testCT = spWeb.ContentTypes[ctName];

                // we will not create the content type if it exists
                if (testCT == null)
                {
                    // the content type does not exist; add it
                    addContentType(spWeb, ctName);
                }

                if ((spWeb.Lists.TryGetList("MyDocuments") == null))
                {
                    // create the list if it doesn't exist
                    CreateDocLib(spWeb);
                }
            }
        }
        #endregion

    The addFields method uses the utilities library to add site columns to the site. We can add as many fields within this method as we like. Here we are adding one for demonstration purposes, Icon, as a URL type.

        public void addFields(SPWeb spWeb)
        {
            Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup);
        }

    The addContentType method adds the new content type to the site content types. We have already checked to see that it does not exist. In addition, here is where we add the linkages from our previously created site columns to our new content type.

        private static void addContentType(SPWeb spWeb, string name)
        {
            SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Document"], spWeb.ContentTypes, name)
            {
                Group = columnGroup
            };
            spWeb.ContentTypes.Add(myContentType);
            addContentTypeLinkages(spWeb, myContentType);
            myContentType.Update();
        }

    Here we are adding just one linkage, as we only have one additional field in our content type.

        public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
        {
            Utilities.addContentTypeLink(spWeb, "Icon", ct);
        }

    Next we add the logic to create our new document library, which we have already checked does not exist. We create the document library and turn on content types, add the new content type, and then delete the old "Document" content type.

        private void CreateDocLib(SPWeb web)
        {
            using (var site = new SPSite(web.Url))
            {
                var web1 = site.RootWeb;
                var listId = web1.Lists.Add("MyDocuments", string.Empty, SPListTemplateType.DocumentLibrary);
                var lib = web1.Lists[listId] as SPDocumentLibrary;
                lib.ContentTypesEnabled = true;
                var docType = web.ContentTypes[ctName];
                lib.ContentTypes.Add(docType);
                lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id);
                lib.Update();
                AddLibrarySettings(web1, lib);
            }
        }

    Finally, we set some document library settings on our new document library with the AddLibrarySettings method. We then ensure that the new site column is visible when viewed in the browser.

        private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib)
        {
            lib.OnQuickLaunch = true;
            lib.ForceCheckout = true;
            lib.EnableVersioning = true;
            lib.MajorVersionLimit = 5;
            lib.EnableMinorVersions = true;
            lib.MajorWithMinorVersionsLimit = 5;
            lib.Update();

            var view = lib.DefaultView;
            view.ViewFields.Add("Icon");
            view.Update();
        }

    Okay, what's cool here: in a few lines of code, we have created site columns, a content type, and a document library. As a developer, I use this functionality all the time. For instance, I could now just add a web part to this same solution which uses this document library. I love SharePoint! Here is the complete solution: Create Document Library Code

    Read the article

  • what are the advantages and disadvantages of putting code for an unfinished project on github

    - by cori
    I'm starting to work on a project that I intend to release as open source via GitHub. What are the advantages of putting the code on GitHub from the outset, as opposed to waiting until the project is in a working state before publishing? If it matters, this particular project is a C# app/service, and I have only a free GitHub account (so I can't make it private and then pull back the covers later).

    Read the article

  • Webcast: John Fowler Reveals The Next Step In Data Center Consolidation – June 27 At 10 AM PT

    - by Roxana Babiciu
    Completely integrated solutions are just better. But don't take our word for it - encourage your customers and prospects to join this live webcast featuring Oracle EVP John Fowler to find out why. Participants will learn how consolidating their existing data center to this new generation of solutions will simplify architectures, jump start application deployment and improve system performance - with easy self-service and private cloud capabilities.

    Read the article
