Search Results

Search found 70751 results on 2831 pages for 'javax net ssl sslpeerunverifiedexception gri control aix'.


  • Weblogs.asp.net has a problem, it is spam

    - by Chris Hammond
    Is anyone at Microsoft listening to the SPAM problem here on Weblogs.asp.net? My “Can anyone do anything about the spam here on weblogs.asp.net?” post from October got over 12 spam comments posted to it in the past 24 hours. I have comments all moderated, but that just means I have a crapload of work to do each time people comment. Also, when you click on a link from a comment notification email you are taken to an insecure site warning due to an invalid SSL Cert. We really just need some updates...(read more)

    Read the article

  • Pluralsight Meet the Author Podcast on Building ASP.NET MVC Applications with HTML5 and jQuery

    - by dwahlin
    In the latest installment of Pluralsight’s Meet the Author podcast series, Fritz Onion and I talk about my new course, Building ASP.NET MVC Apps with Entity Framework Code First, HTML5, and jQuery.  In the interview I describe how the course provides a complete end-to-end view of building an application using multiple technologies.  I go into some detail about how the data access layer was built as well as how the UI works. Listen to it below:   Meet the Author:  Dan Wahlin on Building ASP.NET MVC Apps with Entity Framework Code First, HTML5, and jQuery

    Read the article

  • Will bing bot index pages with invalid SSL certificates?

    - by Martin
    Bingbot and Yahoo Slurp do not support SNI (Server Name Indication) when using SSL. Ignoring other workarounds (multi-domain certificates, non-SSL content etc.), will Bingbot index pages that have an invalid SSL certificate, e.g. one issued for example.net but used on example.com? If possible please provide an example from Yahoo or Bing. I have found websites in Bing that use self-signed certificates and are indexed correctly, but what about invalid certificates?

    Read the article

  • Is there a modern tutorial for setting up SSL on apache2?

    - by John Baber
    I've been running apache2 for ages on my Ubuntu server without SSL. Now that I want to have some directories delivered by SSL, I can't find any straightforward tutorials that were written recently. The best I've found is http://vanemery.com/Linux/Apache/apache-SSL.html, but it tells me to put stuff in /etc/httpd/conf. I don't want to guess that that should translate to /etc/apache2/conf, because guessing based on old tutorials has ruined my web serving before.

    Read the article

  • Trying to test for OS and space in filesystem on AIX

    - by Buzkie
    I need to check if I Filesystem exists, and if it does exist there is 300 MB of space in it. What I have so far: if [ "$(df -m /opt/IBM | grep -vE '^Filesystem' | awk '{print ($3)}')" < "300" ] then echo "not enough space in the target filesystem" exit 1 fi This throws an error. I don't really know what I'm doing in shell. Please help. -Alex

    Read the article

  • Why wouldn't I be able to establish a trust relationship for a SSL/TLS channel?

    - by Abe Miessler
    I have a piece of .NET code that is erroring out when it makes a call to HttpWebRequest.GetRequestStream. Here is the error message: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. I've read a few things that suggest that I might need a certificate on the machine running the code, but I'm not sure if that's true or how to do it. If I need to get a certificate, how do I do it? Code: var request = (HttpWebRequest)HttpWebRequest.Create(requestUrl); //my url request.Method = StringUtilities.ConvertToString(httpMethod); // Set the http method GET, POST, etc. if (postData != null) { request.ContentLength = postData.Length; request.ContentType = contentType; using (var dataStream = request.GetRequestStream()) { dataStream.Write(postData, 0, postData.Length); } }
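
    A common first step here (a diagnostic sketch, not necessarily the fix for this poster's case) is to hook ServicePointManager.ServerCertificateValidationCallback so you can see which SSL policy error is actually breaking the trust relationship; the callback below only logs and still rejects invalid certificates:

        using System;
        using System.Net;
        using System.Net.Security;

        // Diagnostic hook: runs for every outgoing HTTPS request in the AppDomain.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
            {
                // Shows whether it is a name mismatch, chain error, or unavailable certificate.
                Console.WriteLine("SSL policy errors: " + sslPolicyErrors);
                // Still only accept certificates that validated cleanly.
                return sslPolicyErrors == SslPolicyErrors.None;
            };

    Once you know whether it is an untrusted root, an expired certificate or a host-name mismatch, you know whether a certificate needs to be installed in the machine's trusted store or fixed on the server.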

    Read the article

  • Routing Business Branches: Granular access control in ASP.NET MVC

    - by FreshCode
    How should ASP.NET MVC routes be structured to allow granular role-based access control to business branches? Every business entity is related to a branch, either by itself or via its parent entities. Is there an elegant way to authorize actions based on user-roles for any number of branches? 1. {branch} in route? {branch}/{controller}/{action}/{id} Action: [Authorize(Roles="Technician")] public ActionResult BusinessWidgetAction(BusinessObject obj) { // Authorize will test if User has Technician role in branch context // ... } 2. Retrieve branch from business entity? {controller}/{action}/{id} Action: public ActionResult BusinessWidgetAction(BusinessObject obj) { if (!User.HasAccessTo("WidgetAction", obj.Branch)) throw new HttpException(403, "No soup for you!"); // or redirect // ... } 3. Or is there a better way?
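
    For option 1, one possible shape (a sketch only; the branch-qualified role naming convention is an assumption, not an existing API) is a custom AuthorizeAttribute that reads the {branch} route value before doing the role check:

        using System.Web;
        using System.Web.Mvc;

        public class BranchAuthorizeAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                // Standard authentication/role check first.
                if (!base.AuthorizeCore(httpContext))
                    return false;

                // Pull the {branch} segment out of the current route.
                var routeData = httpContext.Request.RequestContext.RouteData;
                var branch = routeData.Values["branch"] as string;
                if (string.IsNullOrEmpty(branch))
                    return false;

                // Assumed convention: roles are stored branch-qualified, e.g. "Technician@MainBranch".
                return httpContext.User.IsInRole(Roles + "@" + branch);
            }
        }

    Usage would then look like [BranchAuthorize(Roles = "Technician")] on the action, combined with the {branch}/{controller}/{action}/{id} route from option 1.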

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET. This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include:
    - Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview
    - Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
    - Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
    - Virtual Machines: Start and Stop VMs w/ suspend billing directly from within Visual Studio
    - Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support
    - Service Bus: New high availability options, Notification Hub support, improved VS tooling
    - PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more
    All of these SDK enhancements are now available to start using immediately and you can download the SDK from the Windows Azure .NET Developer Center. Visual Studio’s Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today’s SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today:
    Visual Studio 2013 Preview Support
    Today’s Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the “install the sdk” link from the Windows Azure .NET Developer Center. Then, choose which version of Visual Studio you want to use it with. Clicking the third link will install the SDK with the latest VS 2013 Preview: If you don’t already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web.
    Visual Studio 2013 VM Image Hosted in the Cloud
    One of the requests we’ve heard from several customers has been to have the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today’s SDK update we’ve added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it. This provides a really easy way to create a development environment in the cloud with the latest tools. With the recent shutdown and suspend billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while the virtual machine is not in use. You can create your own VS image in the cloud by using the New->Compute->Virtual Machine->From Gallery menu within the Windows Azure Management Portal, and then by selecting the “Visual Studio Ultimate 2013 Preview” template:
    Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources
    With the Windows Azure SDK 2.1 release you’ll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node.
From the top level node you can now manage your Windows Azure credentials, import a subscription file or filter Server Explorer to only show services from particular subscriptions or regions. Note: The Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview. Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer. You no longer need to manually add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location:
Subscription and Region Filtering Support
Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services in the Server Explorer by the subscription or region they are in. If you have multiple subscriptions but need to focus your attention on just a few subscriptions for some period of time, this is a handy way to hide the services from the other subscriptions until they become relevant. You can do the same sort of filtering by region. To enable this, just select “Filter Services” from the context menu on the Windows Azure node: Then choose the subscriptions and/or regions you want to filter by. In the below example, I’ve decided to show services from my pay-as-you-go subscription within the East US region: Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately: With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You’ll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription.
Other Improvements
We’ve also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance as well as the properties on storage accounts, virtual machines and web sites. Just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/updating storage tables, blobs and queues from directly within Server Explorer. Simply right-click on the appropriate storage account node and you can create them directly within Visual Studio:
Virtual Machines: Start/Stop within Visual Studio
Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it: This is especially useful in Dev/Test scenarios where you can start a VM – such as a SQL Server – during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now directly remote desktop into VMs using the “Connect using Remote Desktop” context menu command in VS Server Explorer.
Cloud Services: Emulator Express with Run as Normal User Support
You can now launch Visual Studio and run your cloud services locally as a Normal User (without having to elevate to an administrator account) using a new Emulator Express option included as a preview feature with this SDK release. Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode – one instance per role – and it doesn’t require administrative permissions and uses 40% fewer resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles. To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project. On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab. Check the setting for IIS (Internet Information Services). Make sure that the option is set to IIS Express, not the full version of IIS. Emulator Express is not compatible with full IIS. On the Web tab, choose the option for Emulator Express.
Service Bus: Notification Hubs
With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN here, or watch the Notification Hubs //BUILD/ presentation here.
Service Bus: Paired Namespaces
One of the new features included with today’s Windows Azure SDK 2.1 release is support for Service Bus “Paired Namespaces”. Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example: due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which enables you to connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic Asynchronous Messaging Patterns and High Availability.
Service Bus: Tooling Improvements
In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio’s Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template. This code now uses an event-driven “message pump” programming model using the QueueClient.OnMessage method.
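
As a rough illustration (my sketch, not code from the post), the OnMessage "message pump" pattern looks roughly like this, assuming a queue named "orders" and a connection string pulled from configuration:

    using System;
    using Microsoft.ServiceBus.Messaging;

    // Connection string and queue name are illustrative placeholders.
    string connectionString = "<your Service Bus connection string>";
    var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

    // The callback is invoked for each message; no explicit Receive() loop is needed.
    client.OnMessage(message =>
    {
        Console.WriteLine("Processing message: " + message.MessageId);
        // Complete/Abandon is handled automatically unless OnMessageOptions.AutoComplete is turned off.
    });
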
PowerShell: Tons of New Automation Commands
Since my last blog post on the previous Windows Azure SDK 2.0 release, we’ve updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We’ve added new cmdlets in the following areas: China instance and Windows Azure Pack support, Environment Configuration, VMs, Cloud Services, Web Sites, Storage, SQL Azure, and Service Bus.
China Instance and Windows Azure Pack
We now support the following cmdlets for the China instance and Windows Azure Pack, respectively: China Instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network; Windows Azure Pack: Web Sites, Service Bus. We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future.
Virtual Machines: Stop/Start Virtual Machines
Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing: If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter.
Virtual Machines: VM endpoint ACLs
We’ve added and updated a bunch of cmdlets for you to configure fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint: New-AzureAclConfig, Get-AzureAclConfig, Set-AzureAclConfig, Remove-AzureAclConfig, Add-AzureEndpoint -ACL, Set-AzureEndpoint -ACL. The following example shows how to add an ACL rule to an existing endpoint of a VM. Other improvements for Virtual Machine management include: an added -NoWinRMEndpoint parameter to New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management; an added -DirectServerReturn parameter to Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return; and an added Set-AzureLoadBalancedEndpoint cmdlet to modify load balanced endpoints.
Cloud Services: Remote Desktop and Diagnostics
Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We’ve introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell. Windows Azure Cloud Services Remote Desktop extension: New-AzureServiceRemoteDesktopExtensionConfig, Get-AzureServiceRemoteDesktopExtension, Set-AzureServiceRemoteDesktopExtension, Remove-AzureServiceRemoteDesktopExtension. Windows Azure Cloud Services Diagnostics extension: New-AzureServiceDiagnosticsExtensionConfig, Get-AzureServiceDiagnosticsExtension, Set-AzureServiceDiagnosticsExtension, Remove-AzureServiceDiagnosticsExtension. The following example shows how to enable Remote Desktop for a Cloud Service.
Web Sites: Diagnostics
With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet to get the log streaming of your Web Sites. Recently, we’ve also added cmdlets to configure Web Site application diagnostics: Enable-AzureWebsiteApplicationDiagnostic and Disable-AzureWebsiteApplicationDiagnostic. The following 2 examples show how to enable application diagnostics to the file system and a Windows Azure Storage Table:
SQL Database
Previously, you had to know the SQL Database server admin username and password if you wanted to manage the database in that SQL Database server. Recently, we’ve made the experience much easier by not requiring the admin credential if the database server is in your subscription. So you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets.
Get-AzureSqlDatabase, New-AzureSqlDatabase, Remove-AzureSqlDatabase, Set-AzureSqlDatabase. We’ve also added a -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we’ve also added cmdlets to get the database server quota and set the database service objective. Check out the following cmdlets for details: Get-AzureSqlDatabaseServerQuota, Get-AzureSqlDatabaseServiceObjective, Set-AzureSqlDatabase -ServiceObjective.
Storage and Service Bus
Other new cmdlets include Storage: CRUD cmdlets for Azure Tables and Queues; Service Bus: cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub.
Summary
Today’s release includes a bunch of great features that enable you to build even better cloud solutions. All the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • A Visual Studio tool eliminating the need to rewrite for web and mobile

    - by Visual WebGui
    We have already covered the BYOD requirements that an application developer is faced with in an earlier blog entry (How to Bring Your Own Device (BYOD) to a .NET application). In that entry we emphasized the fact that application developers will need to prepare their applications for serving multiple types of devices on multiple platforms, ranging from the smallest mobile devices up to and beyond the largest desktop devices. The experts' prediction is that in the near future we will see that the...(read more)

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted but which has a huge impact on the time spent in an entity modeling environment is the way the system creates names for elements out of the information provided, in short: automatic element name construction. Element names are created in both directions of modeling: database first and model first and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine grained system for creating element names out of the meta-data available, which I'll describe more in detail below. First the model element related element naming features are highlighted, in the section Automatic model element naming features and after that I'll go more into detail about the relational model element naming features LLBLGen Pro has to offer in the section Automatic relational model element naming features. Automatic model element naming features When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction. Strippers! The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro offers you to define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names. Underscores Be Gone Another thing you might get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to to so. PasCal everywhere... or not, your call LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, and upper-case the first character of a word break fragment, and lower case the rest. 
Say, we keep the defaults, which is remove underscores and PasCal case always and strip the TBL_ fragment, we get with our example TBL_ORDER_LINES, after stripping TBL_ from the table name two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there! Pluralization and Singularization In general entity names are singular, like Customer or OrderLine so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after. Show me the patterns! There are other situations in which you want more flexibility. Say, you have an entity Customer and an entity Order and there's a foreign key constraint defined from the target of Order and the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look like. For example, if you rather have Customer.OrderCollection, you can do so, by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed. When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities: Customer and Order, and they have their fields setup properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes, on NHibernate and EF these can be hidden in the generated code if you want to). A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as foreign key name in Order. The pattern for foreign key fields gives you that freedom. Abbreviations... make sense of OrdNr and friends I already described word breaks in the PasCal casing paragraph, how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office has a hate-hate relationship with his keyboard: he can't stand it: typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you also like is to not have to rename that field manually. 
There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs and during reverse engineering model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two values: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: Navigate to Conventions -> Element Name Construction -> Abbreviations. Automatic relational model element naming features Not everyone works database first: it may very well be the case you start from scratch, or have to add additional tables to an existing database. For these situations, it's key you have the flexibility that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development. Underscores, welcome back! Not every database is case insensitive, and not every organization requires PasCal cased table/field names, some demand all lower or all uppercase names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with: case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below Casing directives so everyone can sleep well at night For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE. Sequence naming after a pattern Some databases support sequences, and using model-first development it's key to have sequences, when needed, to be created automatically and if possible using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and as you want all numeric PK fields to be sequenced, you have enabled this by the setting Auto assign sequences to integer pks. 
When you're using LLBLGen Pro's auto-map feature, to create new tables and constraints from the model, it will create a new table, ORDER, based on your settings I previously discussed above, with a PK field ID and it also creates a sequence, SEQ_ORDER, which is auto-assigns to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like with the other patterns previously discussed. Grouping and schemas When you start from scratch, and you're working model first, the tables created by LLBLGen Pro will be in a catalog and / or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique for these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you check the setting Set schema name after group name to true (default). This gives you total control over where what is placed in the database from your model. But I want plural table names... and TBL_ prefixes! For now we follow best practices which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version. Conclusion LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and that it has given you new insights in the smaller features available to you in LLBLGen Pro, ones you might not have thought off in the first place. Enjoy!
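
As a toy illustration of the kind of name construction described above (my own sketch, not LLBLGen Pro's actual implementation; it skips singularization and the pattern macros), the strip/word-break/abbreviation steps could look like this:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class NameBuilder
    {
        // Abbreviation -> full word pairs, as in the ORD_NR example above.
        static readonly Dictionary<string, string> Abbreviations =
            new Dictionary<string, string> { { "ORD", "Order" }, { "NR", "Number" } };

        // Build("TBL_ORDER_LINES", "TBL_") -> "OrderLines"; Build("ORD_NR", "TBL_") -> "OrderNumber"
        public static string Build(string dbName, string stripPrefix)
        {
            // Strip the prefix fragment, if present.
            if (dbName.StartsWith(stripPrefix, StringComparison.OrdinalIgnoreCase))
                dbName = dbName.Substring(stripPrefix.Length);

            // Split on word breaks, expand abbreviations, PasCal case the rest.
            var fragments = dbName
                .Split(new[] { '_', ' ' }, StringSplitOptions.RemoveEmptyEntries)
                .Select(f => Abbreviations.ContainsKey(f.ToUpperInvariant())
                    ? Abbreviations[f.ToUpperInvariant()]
                    : char.ToUpperInvariant(f[0]) + f.Substring(1).ToLowerInvariant());

            return string.Concat(fragments);
        }
    }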

    Read the article

  • Sharp Architecture 1.9.5 Released

    - by AlecWhittington
    The S#arp Architecture team is proud to announce the release of version 1.9.5. This version has had the following changes: upgraded to MVC 3 RTM; solution upgraded to .NET 4; implementation of IDependencyResolver provided, but not implemented. This marks the last scheduled release of 1.X for S#arp Architecture. The team is working hard to get the 2.0 release out the door and we hope to have a preview of that coming soon. With regards to IDependencyResolver, we have provided an implementation, but have...(read more)

    Read the article

  • Interesting links week #6

    - by erwin21
    Below is a list of interesting links that I found this week:
    Frontend: Understanding CSS Selectors
    Javascript: Breaking the Web with hash-bangs; HTML5 Peeks, Pokes and Pointers
    Development: 10 Points to Take Care While Building Links for SEO; View State decoder; ASP.NET MVC Performance Tips
    Other: Things to Remember Before Launching a Website; Tips and Tricks On How To Become a Presentation Ninja; 10 Ways to Simplify Your Workday
    Interested in more interesting links? Follow me on Twitter: http://twitter.com/erwingriekspoor

    Read the article

  • SSL authentication error: RemoteCertificateChainErrors on ASP.NET on Ubuntu

    - by Frank Krueger
    I am trying to access Gmail's SMTP service from an ASP.NET MVC site running under Mono 2.4.2.3. But I keep getting this error: System.InvalidOperationException: SSL authentication error: RemoteCertificateChainErrors at System.Net.Mail.SmtpClient.m__3 (System.Object sender, System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Security.Cryptography.X509Certificates.X509Chain chain, SslPolicyErrors sslPolicyErrors) [0x00000] at System.Net.Security.SslStream+c__AnonStorey9.m__9 (System.Security.Cryptography.X509Certificates.X509Certificate cert, System.Int32[] certErrors) [0x00000] at Mono.Security.Protocol.Tls.SslClientStream.OnRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000] at Mono.Security.Protocol.Tls.SslStreamBase.RaiseRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000] at Mono.Security.Protocol.Tls.SslClientStream.RaiseServerCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] certificateErrors) [0x00000] at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.validateCertificates (Mono.Security.X509.X509CertificateCollection certificates) [0x00000] at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsTls1 () [0x00000] at Mono.Security.Protocol.Tls.Handshake.HandshakeMessage.Process () [0x00000] at (wrapper remoting-invoke-with-check) Mono.Security.Protocol.Tls.Handshake.HandshakeMessage:Process () at Mono.Security.Protocol.Tls.ClientRecordProtocol.ProcessHandshakeMessage (Mono.Security.Protocol.Tls.TlsStream handMsg) [0x00000] at Mono.Security.Protocol.Tls.RecordProtocol.InternalReceiveRecordCallback (IAsyncResult asyncResult) [0x00000] I have installed certificates using: certmgr -ssl -m smtps://smtp.gmail.com:465 with this output: Mono Certificate Manager - version 2.4.2.3 Manage X.509 certificates and CRL from stores. Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed. X.509 Certificate v3 Issued from: C=US, O=Equifax, OU=Equifax Secure Certificate Authority Issued to: C=US, O=Google Inc, CN=Google Internet Authority Valid from: 06/08/2009 20:43:27 Valid until: 06/07/2013 19:43:27 *** WARNING: Certificate signature is INVALID *** Import this certificate into the CA store ?yes X.509 Certificate v3 Issued from: C=US, O=Google Inc, CN=Google Internet Authority Issued to: C=US, S=California, L=Mountain View, O=Google Inc, CN=smtp.gmail.com Valid from: 04/22/2010 20:02:45 Valid until: 04/22/2011 20:12:45 Import this certificate into the AddressBook store ?yes 2 certificates added to the stores. In fact, this worked for a month but mysteriously stopped working on May 5. I installed these new certs today, but I am still getting these errors.

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.NET developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses the process of creating an in-memory file and reading/writing data from/to it using the new MemoryMappedFile feature. I have a database of users, and I need to send a notice to all my users as a PDF document. The sending-mail part of it is not covered in this article. The PDF document will contain the company letterhead, to make it more official. I have a list of users stored in a database table named “tblusers”. For each user I need to send a customized message addressed to them personally. The database structure for the users is given below (id, Title, Full Name): 1, Mr., Sreeju Nair K. G.; 2, Dr., Alberto Mathews; 3, Prof., Venketachalam. Now I am going to generate the PDF document that contains a message to the user, in the following format: Dear <Title> <FullName>, The message for the user. Regards, Administrator. Also I have an image, bg.jpg, that contains the background for the generated document. I have created a .NET 4.0 empty web application project named “iTextSharpSample”. The first thing I need to do is download the iTextSharp dll from SourceForge. You can find the URL for the download here. http://sourceforge.net/projects/itextsharp/files/ I have extracted the Zip file and added itextsharp.dll as a reference to my project. Also I have added a web form named default.aspx to my project. After doing all this, the solution explorer has the following view. In the default.aspx page, I inserted one grid view and associated it with a SQL data source control that binds data from tblusers. I have added a button column in the grid view with the text “generate pdf”. The output of the page in the browser is as follows. Now I am going to create a PDF document when the user clicks on the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying the onrowcommand event handler. My gridview source looks like <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1" Width="481px" CellPadding="4" ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF" > ………………………………………………………………………….. ………………………………………………………………………….. </asp:GridView> In the code behind, I wrote the corresponding event handler. protected void Generate_PDF(object sender, GridViewCommandEventArgs e) { // The button click event handler code. // I am going to explain the code for this section in the remaining part of the article } The Generate_PDF method is straightforward: it gets the title, full name and message into some variables, then creates the PDF using these variables. The code for getting data from the grid view is as follows // get the row index stored in the CommandArgument property int index = Convert.ToInt32(e.CommandArgument); // get the GridViewRow where the command is raised GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index]; string title = selectedRow.Cells[1].Text; string fullname = selectedRow.Cells[2].Text; string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personnal details by visiting the member area of the website. ................................... 
"; since I don’t want to save the file in the disk, I am going the new feature introduced in .Net framework 4, called Memory-Mapped Files. Using Memory-Mapped mapped file, you can created non-persisted memory mapped files, that are not associated with a file in a disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article http://msdn.microsoft.com/en-us/library/dd997372.aspx The below portion of the code using MemoryMappedFile object to create a test pdf document in memory and perform read/write operation on file. The CreateViewStream() object will give you a stream that can be used to read or write data to/from file. The code is very straight forward and I included comment so that you can understand the code. using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000)) { // Create a new pdf document object using the constructor. The parameters passed are document size, left margin, right margin, top margin and bottom margin. iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72,72,172,72); //get an instance of the memory mapped file to stream object so that user can write to this using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { // associate the document to the stream. PdfWriter.GetInstance(d, stream); /* add an image as bg*/ iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png")); jpg.Alignment = iTextSharp.text.Image.UNDERLYING; jpg.SetAbsolutePosition(0, 0); //this is the size of my background letter head image. the size is in points. this will fit to A4 size document. jpg.ScaleToFit(595, 842); d.Open(); d.Add(jpg); d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname))); d.Add(new Paragraph("\n")); d.Add(new Paragraph(msg)); d.Add(new Paragraph("\n")); d.Add(new Paragraph(String.Format("Administrator"))); d.Close(); } //read the file data byte[] b; using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { BinaryReader rdr = new BinaryReader(stream); b = new byte[mmf.CreateViewStream().Length]; rdr.Read(b, 0, (int)mmf.CreateViewStream().Length); } Response.Clear(); Response.ContentType = "Application/pdf"; Response.BinaryWrite(b); Response.End(); } Press ctrl + f5 to run the application. First I got the user list. Click on the generate pdf icon. The created looks as follows. Summary: Creating pdf document using iTextSharp is easy. You will get lot of information while surfing the www. Some useful resources and references are mentioned below http://itextsharp.com/ http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/ Hope you enjoyed the article.

    Read the article

  • Find CheckBox from GridView in Content Page/Master Page

    - by Suthish Nair
    How to find a control in a GridView which resides in a content page. Here is an example of finding the CheckBox; hope this will help you all... .aspx code <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" Runat="Server"> <asp:GridView ID="GridView1" runat="server"> <Columns> <asp:TemplateField> <ItemTemplate> <asp:CheckBox ID="chkID" runat="server" /> </ItemTemplate> </asp...(read more)
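
    The code-behind half that the excerpt cuts off typically boils down to FindControl on each row (a generic sketch of the usual pattern, not necessarily the article's exact code):

        using System.Web.UI.WebControls;

        // Runs in the content page's code-behind, e.g. from a button click handler.
        foreach (GridViewRow row in GridView1.Rows)
        {
            CheckBox chk = (CheckBox)row.FindControl("chkID");
            if (chk != null && chk.Checked)
            {
                // This row's checkbox is ticked; act on the row here.
            }
        }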

    Read the article

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully setup my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it. My question isn't really about getting it working. I'm more confused about the certificates and where they need to go. Here are my two programs that do work: Server: #!/bin/env python from OpenSSL import SSL import socket import pickle def verify_cb(conn, cert, errnum, depth, ok): print('Got cert: %s' % cert.get_subject()) return ok ctx = SSL.Context(SSL.TLSv1_METHOD) ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb) # ?????? ctx.use_privatekey_file('./Dmgr-key.pem') ctx.use_certificate_file('Dmgr-cert.pem') # ?????? ctx.load_verify_locations('./CAcert.pem') server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM)) server.bind(('', 50000)) server.listen(3) a, b = server.accept() c = a.recv(1024) print(c) Client: from OpenSSL import SSL import socket import pickle def verify_cb(conn, cert, errnum, depth, ok): print('Got cert: %s' % cert.get_subject()) return ok ctx = SSL.Context(SSL.TLSv1_METHOD) ctx.set_verify(SSL.VERIFY_PEER, verify_cb) # ?????????? ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem') ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem') # ????????? ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem') sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM)) sock.connect(('10.0.0.3', 50000)) a = Tester(2, 2) b = pickle.dumps(a) sock.send("Hello, world") sock.flush() sock.send(b) sock.shutdown() sock.close() I found this information from ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections between the " # ????????." I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right? I tried alternating removing either the pkey or cert on either box, and I get the following error no matter which I remove: OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')] Could someone explain if this is the expected behavior for SSL. Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!

    Read the article

  • Creating dynamic breadcrumb in asp.net mvc with mvcsitemap provider

    - by Jalpesh P. Vadgama
    I have done lots of breadcrumb kind of things in normal ASP.NET Web Forms, and I was looking for the same for ASP.NET MVC. After searching on the internet I found one great NuGet package for the MVC sitemap provider, which can be easily implemented via a site map provider. So let's check how it works. I have created a new MVC 3 web application called breadcrumb and now I am adding a reference to the site map provider via the NuGet package like the following. You can find more information about the MVC sitemap provider at the following URL. https://github.com/maartenba/MvcSiteMapProvid So once you add the site map provider, you will find a Mvc.SiteMap file like the following, and the following is the content of that file. <?xml version="1.0" encoding="utf-8" ?> <mvcSiteMap xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-3.0" xsi:schemaLocation="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-3.0 MvcSiteMapSchema.xsd" enableLocalization="true"> <mvcSiteMapNode title="Home" controller="Home" action="Index"> <mvcSiteMapNode title="About" controller="Home" action="About"/> </mvcSiteMapNode> </mvcSiteMap> Now that we have added the site map, it's time to make the breadcrumb dynamic. As we all know, within the standard ASP.NET MVC template we have action links by default for Home and About, like the following. <div id="menucontainer"> <ul id="menu"> <li>@Html.ActionLink("Home", "Index", "Home")</li> <li>@Html.ActionLink("About", "About", "Home")</li> </ul> </div> Now I want to replace that with our sitemap provider and make it dynamic, so I have added the following code. <div id="menucontainer"> @Html.MvcSiteMap().Menu(true) </div> That's it. This is the magic code: @Html.MvcSiteMap will dynamically create the breadcrumb for you. Now let's run this in the browser. You can see that it has created the breadcrumb dynamically without writing any action link code. So here you can see that with the MvcSiteMap provider we don't have to write any code; we just need to add the menu syntax and it will do the rest automatically. That's it. Hope you liked it. Stay tuned for more; till then, happy programming.

    Read the article

  • Dynamically creating meta tags in asp.net mvc

    - by Jalpesh P. Vadgama
    As we all know, meta tags have a very important role in search engine optimization, and if we want to have our site listed with a good ranking on search engines then we have to put in meta tags. Some time ago I blogged about dynamically creating meta tags in ASP.NET 2.0/3.5 sites; in this blog post I am going to explain how we can create a meta tag dynamically very easily. To have a meta tag dynamically we have to create the meta tag on the server side, so I have created a method like the following. public string HomeMetaTags() { System.Text.StringBuilder strMetaTag = new System.Text.StringBuilder(); strMetaTag.AppendFormat(@"<meta content='{0}' name='Keywords'/>","Home Action Keyword"); strMetaTag.AppendFormat(@"<meta content='{0}' name='Description'/>", "Home Description Keyword"); return strMetaTag.ToString(); } Here you can see that I have written a method which will return a string with meta tags. Here you can write any logic: you can fetch it from the database or you can even fetch it from XML based on the key passed. For demo purposes I have hardcoded it. So it will create a meta tag string and will return it. Now I am going to store that meta tag in the ViewBag, just like we have a title tag. In this post I am going to use the standard template, so we have our title tag there in ViewBag.Message. In the same way I am going to save the meta tag in the ViewBag like the following. public ActionResult Index() { ViewBag.Message = "Welcome to ASP.NET MVC!"; ViewBag.MetaTag = HomeMetaTags(); return View(); } Here in the above code you can see that I have stored the meta tag in ViewBag.MetaTag. Now, as I am using the standard ASP.NET MVC 3 template, we have our head element in the Shared folder's _layout.cshtml file. So to render the meta tag I have modified the head tag part of _layout.cshtml like the following. <head> <title>@ViewBag.Title</title> <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script> @Html.Raw(ViewBag.MetaTag) </head> Here in the above code you can see I have used the @Html.Raw method to embed the meta tag in the _layout.cshtml page. This Html.Raw method will embed the output into the head tag section without encoding the HTML. As we have already taken care of the HTML tags in the string function, we don't need the HTML encoding. Now it's time to run the application in the browser. Once you run your application in the browser and click on view source you will find the meta tags for the home page as following. That's it. It's very easy to create meta tags dynamically. Hope you liked it. Stay tuned for more. Till then, happy programming.
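
    One small hardening worth considering (my suggestion, not part of the original post): if the keyword or description text ever comes from the database or from user input, encode it before placing it into the attribute, for example:

        using System.Text;
        using System.Web;

        public string HomeMetaTags(string keywords, string description)
        {
            var strMetaTag = new StringBuilder();
            // HtmlAttributeEncode guards against stray quotes or markup in the values.
            strMetaTag.AppendFormat("<meta content='{0}' name='Keywords'/>",
                HttpUtility.HtmlAttributeEncode(keywords));
            strMetaTag.AppendFormat("<meta content='{0}' name='Description'/>",
                HttpUtility.HtmlAttributeEncode(description));
            return strMetaTag.ToString();
        }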

    Read the article

  • ASP.NET control in one content area needs to reference a control in another content area

    - by harrije
    I have a master page that divides the main content into two areas. There are two asp:ContentPlaceHolder controls in the body section of the master page with IDs cphMain and cphSideBar respectively. One of the corresponding content pages has a control in cphMain that needs to refer to a control in cphSideBar. Specifically, a SqlDataSource in cphMain references a TextBox in cphSideBar to use as a parameter in the select command. When the content page loads the following run-time error occurs: Could not find control 'TextBox1' in ControlParameter 'date_con'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Could not find control 'TextBox1' in ControlParameter 'date_con'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidOperationException: Could not find control 'TextBox1' in ControlParameter 'date_con'.] System.Web.UI.WebControls.ControlParameter.Evaluate(HttpContext context, Control control) +1753150 System.Web.UI.WebControls.Parameter.UpdateValue(HttpContext context, Control control) +47 System.Web.UI.WebControls.ParameterCollection.UpdateValues(HttpContext context, Control control) +114 System.Web.UI.WebControls.SqlDataSource.LoadCompleteEventHandler(Object sender, EventArgs e) +43 System.EventHandler.Invoke(Object sender, EventArgs e) +0 System.Web.UI.Page.OnLoadComplete(EventArgs e) +8698566 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +735 I kinda know what the problem is... ASP.NET does not like the fact that the SqlDataSource and TextBox are in different asp:Content controls within the content page. As a workaround, I have another TextBox in cphMain with the SqlDataSource which has Visible=False. Then in the Page_Load() event handler the contents of the TextBox in cphSideBar is copied into the contents of the non-visible TextBox in cphMain. I get the results I want with the work around I've come up with, but it seems like such a hack. I was wondering if there is a better solution I'm missing. Please advise.
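
    An alternative worth trying (a sketch, not a tested fix for this exact page): drop the ControlParameter in favor of a plain <asp:Parameter Name="date_con" /> and set its value yourself in the SqlDataSource's Selecting event. Both content areas belong to the same content page, so its code-behind can read the TextBox directly; the parameter name @date_con is assumed to match the ControlParameter from the error message:

        using System.Web.UI.WebControls;

        // In the content page's code-behind, wired to SqlDataSource1's OnSelecting event.
        protected void SqlDataSource1_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            // TextBox1 lives in the other ContentPlaceHolder, but it is still a member
            // of this content page, so it can be read directly here.
            e.Command.Parameters["@date_con"].Value = TextBox1.Text;
        }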

    Read the article

  • NuGet package manager in Visual Studio 2012

    - by sreejukg
    NuGet is a package manager that helps developers to automate the process of installing and upgrading packages in Visual Studio projects. It is free and open source. You can see the project on CodePlex at the link below. http://nuget.codeplex.com/ Nowadays developers need to work with several packages or libraries from various sources; a typical example is jQuery. You will hardly find a website that does not use jQuery. When you include these packages by manually copying the files, it is a difficult task to update these files as new versions get released. NuGet is a Visual Studio add-on that comes by default with Visual Studio 2012 and manages such packages. So by using NuGet, you can include new packages in your project as well as update existing ones with the latest versions. NuGet is a Visual Studio extension, and, happy news for developers, it is shipped with Visual Studio 2012 by default. In this article, I am going to demonstrate how you can include jQuery (or anything similar) in a .NET project using the NuGet package manager. I have Visual Studio 2012, and I created an empty ASP.NET web application. In the solution explorer, the project looks like the following. Now I need to add jQuery to this project, and for this I am going to use NuGet. From Solution Explorer, right-click the project and you will see "Manage NuGet Packages". Click on the Manage NuGet Packages option so that you will get the NuGet Package Manager dialog. Since there is no package installed in my project, you will see a "no packages installed" message. From the left menu, select the online option, and in the Search box (that is available in the top right corner) enter the name of the package you are looking for. In my case I just entered jQuery. Now the NuGet package manager will search online and bring back all the available packages that match my search criteria. You can select the right package and use the Install button just next to the package details. Also in the right pane, it will show the links to project information and license terms; you can see more details of the project you are looking for from the provided links. Now I have selected to install jQuery. Once installed successfully, you can find the green icon next to it that tells you the package has been installed successfully into your project. Now if you go to the Installed packages link from the left menu of the package manager, you can see jQuery is installed and you can uninstall it by just clicking on the Uninstall button. Now close the package manager dialog and let us examine the project in the solution explorer. You can see some new entries in your project. One is the Scripts folder, where jQuery got installed, and another is a packages.config file. The packages.config is an XML file that tells the NuGet package manager the id and the version of the packages you install. Based on this file the NuGet package manager will identify the installed packages and the corresponding versions. Installing packages using the NuGet package manager will save a lot of time for developers, and developers can get upgrades for the installed packages very easily.
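
    For reference, a packages.config generated by such an install looks roughly like the snippet below (the version and target framework shown are only illustrative):

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <!-- One entry per installed package: id plus the exact version NuGet restored -->
          <package id="jQuery" version="1.8.2" targetFramework="net40" />
        </packages>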

    Read the article

  • Rendering ASP.NET MVC Views to String

    - by Rick Strahl
    It's not uncommon in my applications that I require longish text output that does not have to be rendered into the HTTP output stream. The most common scenario I have for 'template driven' non-Web text is for emails of all sorts. Logon confirmations and verifications, email confirmations for things like orders, status updates or scheduler notifications - all of which require merged text output both within and sometimes outside of Web applications. On other occasions I also need to capture the output from certain views for logging purposes. Rather than creating text output in code, it's much nicer to use the rendering mechanism that ASP.NET MVC already provides by way of it's ViewEngines - using Razor or WebForms views - to render output to a string. This is nice because it uses the same familiar rendering mechanism that I already use for my HTTP output and it also solves the problem of where to store the templates for rendering this content in nothing more than perhaps a separate view folder. The good news is that ASP.NET MVC's rendering engine is much more modular than the full ASP.NET runtime engine which was a real pain in the butt to coerce into rendering output to string. With MVC the rendering engine has been separated out from core ASP.NET runtime, so it's actually a lot easier to get View output into a string. Getting View Output from within an MVC Application If you need to generate string output from an MVC and pass some model data to it, the process to capture this output is fairly straight forward and involves only a handful of lines of code. The catch is that this particular approach requires that you have an active ControllerContext that can be passed to the view. This means that the following approach is limited to access from within Controller methods. Here's a class that wraps the process and provides both instance and static methods to handle the rendering:/// <summary> /// Class that renders MVC views to a string using the /// standard MVC View Engine to render the view. /// /// Note: This class can only be used within MVC /// applications that have an active ControllerContext. /// </summary> public class ViewRenderer { /// <summary> /// Required Controller Context /// </summary> protected ControllerContext Context { get; set; } public ViewRenderer(ControllerContext controllerContext) { Context = controllerContext; } /// <summary> /// Renders a full MVC view to a string. Will render with the full MVC /// View engine including running _ViewStart and merging into _Layout /// </summary> /// <param name="viewPath"> /// The path to the view to render. Either in same controller, shared by /// name or as fully qualified ~/ path including extension /// </param> /// <param name="model">The model to render the view with</param> /// <returns>String of the rendered view or null on error</returns> public string RenderView(string viewPath, object model) { return RenderViewToStringInternal(viewPath, model, false); } /// <summary> /// Renders a partial MVC view to string. Use this method to render /// a partial view that doesn't merge with _Layout and doesn't fire /// _ViewStart. /// </summary> /// <param name="viewPath"> /// The path to the view to render. 
Either in same controller, shared by /// name or as fully qualified ~/ path including extension /// </param> /// <param name="model">The model to pass to the viewRenderer</param> /// <returns>String of the rendered view or null on error</returns> public string RenderPartialView(string viewPath, object model) { return RenderViewToStringInternal(viewPath, model, true); } public static string RenderView(string viewPath, object model, ControllerContext controllerContext) { ViewRenderer renderer = new ViewRenderer(controllerContext); return renderer.RenderView(viewPath, model); } public static string RenderPartialView(string viewPath, object model, ControllerContext controllerContext) { ViewRenderer renderer = new ViewRenderer(controllerContext); return renderer.RenderPartialView(viewPath, model); } protected string RenderViewToStringInternal(string viewPath, object model, bool partial = false) { // first find the ViewEngine for this view ViewEngineResult viewEngineResult = null; if (partial) viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath); else viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null); if (viewEngineResult == null) throw new FileNotFoundException(Properties.Resources.ViewCouldNotBeFound); // get the view and attach the model to view data var view = viewEngineResult.View; Context.Controller.ViewData.Model = model; string result = null; using (var sw = new StringWriter()) { var ctx = new ViewContext(Context, view, Context.Controller.ViewData, Context.Controller.TempData, sw); view.Render(ctx, sw); result = sw.ToString(); } return result; } } The key is the RenderViewToStringInternal method. The method first tries to find the view to render based on its path which can either be in the current controller's view path or the shared view path using its simple name (PasswordRecovery) or alternately by its full virtual path (~/Views/Templates/PasswordRecovery.cshtml). This code should work both for Razor and WebForms views although I've only tried it with Razor Views. Note that WebForms Views might actually be better for plain text as Razor adds all sorts of white space into its output when there are code blocks in the template. The Web Forms engine provides more accurate rendering for raw text scenarios. Once a view engine is found the view to render can be retrieved. Views in MVC render based on data that comes off the controller like the ViewData which contains the model along with the actual ViewData and ViewBag. From the View and some of the Context data a ViewContext is created which is then used to render the view with. The View picks up the Model and other data from the ViewContext internally and processes the View the same it would be processed if it were to send its output into the HTTP output stream. The difference is that we can override the ViewContext's output stream which we provide and capture into a StringWriter(). After rendering completes the result holds the output string. If an error occurs the error behavior is similar what you see with regular MVC errors - you get a full yellow screen of death including the view error information with the line of error highlighted. It's your responsibility to handle the error - or let it bubble up to your regular Controller Error filter if you have one. To use the simple class you only need a single line of code if you call the static methods. 
Here's an example of some Controller code that is used to send a user notification to a customer via email in one of my applications:[HttpPost] public ActionResult ContactSeller(ContactSellerViewModel model) { InitializeViewModel(model); var entryBus = new busEntry(); var entry = entryBus.LoadByDisplayId(model.EntryId); if ( string.IsNullOrEmpty(model.Email) ) entryBus.ValidationErrors.Add("Email address can't be empty.","Email"); if ( string.IsNullOrEmpty(model.Message)) entryBus.ValidationErrors.Add("Message can't be empty.","Message"); model.EntryId = entry.DisplayId; model.EntryTitle = entry.Title; if (entryBus.ValidationErrors.Count > 0) { ErrorDisplay.AddMessages(entryBus.ValidationErrors); ErrorDisplay.ShowError("Please correct the following:"); } else { string message = ViewRenderer.RenderView("~/views/template/ContactSellerEmail.cshtml",model, ControllerContext); string title = entry.Title + " (" + entry.DisplayId + ") - " + App.Configuration.ApplicationName; AppUtils.SendEmail(title, message, model.Email, entry.User.Email, false, false)) } return View(model); } Simple! The view in this case is just a plain MVC view and in this case it's a very simple plain text email message (edited for brevity here) that is created and sent off:@model ContactSellerViewModel @{ Layout = null; }re: @Model.EntryTitle @Model.ListingUrl @Model.Message ** SECURITY ADVISORY - AVOID SCAMS ** Avoid: wiring money, cross-border deals, work-at-home ** Beware: cashier checks, money orders, escrow, shipping ** More Info: @(App.Configuration.ApplicationBaseUrl)scams.html Obviously this is a very simple view (I edited out more from this page to keep it brief) -  but other template views are much more complex HTML documents or long messages that are occasionally updated and they are a perfect fit for Razor rendering. It even works with nested partial views and _layout pages. Partial Rendering Notice that I'm rendering a full View here. In the view I explicitly set the Layout=null to avoid pulling in _layout.cshtml for this view. This can also be controlled externally by calling the RenderPartial method instead: string message = ViewRenderer.RenderPartialView("~/views/template/ContactSellerEmail.cshtml",model, ControllerContext); with this line of code no layout page (or _viewstart) will be loaded, so the output generated is just what's in the view. I find myself using Partials most of the time when rendering templates, since the target of templates usually tend to be emails or other HTML fragment like output, so the RenderPartialView() method is definitely useful to me. Rendering without a ControllerContext The preceding class is great when you're need template rendering from within MVC controller actions or anywhere where you have access to the request Controller. But if you don't have a controller context handy - maybe inside a utility function that is static, a non-Web application, or an operation that runs asynchronously in ASP.NET - which makes using the above code impossible. I haven't found a way to manually create a Controller context to provide the ViewContext() what it needs from outside of the MVC infrastructure. However, there are ways to accomplish this,  but they are a bit more complex. It's possible to host the RazorEngine on your own, which side steps all of the MVC framework and HTTP and just deals with the raw rendering engine. I wrote about this process in Hosting the Razor Engine in Non-Web Applications a long while back. 
It's quite a process to create a custom Razor engine and runtime, but it allows for all sorts of flexibility. There's also a RazorEngine CodePlex project that does something similar. I've been meaning to check out the latter but haven't gotten around to it since I have my own code to do this. The trick to hosting the RazorEngine to have it behave properly inside of an ASP.NET application and properly cache content so templates aren't constantly rebuild and reparsed. Anyway, in the same app as above I have one scenario where no ControllerContext is available: I have a background scheduler running inside of the app that fires on timed intervals. This process could be external but because it's lightweight we decided to fire it right inside of the ASP.NET app on a separate thread. In my app the code that renders these templates does something like this:var model = new SearchNotificationViewModel() { Entries = entries, Notification = notification, User = user }; // TODO: Need logging for errors sending string razorError = null; var result = AppUtils.RenderRazorTemplate("~/views/template/SearchNotificationTemplate.cshtml", model, razorError); which references a couple of helper functions that set up my RazorFolderHostContainer class:public static string RenderRazorTemplate(string virtualPath, object model,string errorMessage = null) { var razor = AppUtils.CreateRazorHost(); var path = virtualPath.Replace("~/", "").Replace("~", "").Replace("/", "\\"); var merged = razor.RenderTemplateToString(path, model); if (merged == null) errorMessage = razor.ErrorMessage; return merged; } /// <summary> /// Creates a RazorStringHostContainer and starts it /// Call .Stop() when you're done with it. /// /// This is a static instance /// </summary> /// <param name="virtualPath"></param> /// <param name="binBasePath"></param> /// <param name="forceLoad"></param> /// <returns></returns> public static RazorFolderHostContainer CreateRazorHost(string binBasePath = null, bool forceLoad = false) { if (binBasePath == null) { if (HttpContext.Current != null) binBasePath = HttpContext.Current.Server.MapPath("~/"); else binBasePath = AppDomain.CurrentDomain.BaseDirectory; } if (_RazorHost == null || forceLoad) { if (!binBasePath.EndsWith("\\")) binBasePath += "\\"; //var razor = new RazorStringHostContainer(); var razor = new RazorFolderHostContainer(); razor.TemplatePath = binBasePath; binBasePath += "bin\\"; razor.BaseBinaryFolder = binBasePath; razor.UseAppDomain = false; razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsBusiness.dll"); razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsWeb.dll"); razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Utilities.dll"); razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.dll"); razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.Mvc.dll"); razor.ReferencedAssemblies.Add("System.Web.dll"); razor.ReferencedNamespaces.Add("System.Web"); razor.ReferencedNamespaces.Add("ClassifiedsBusiness"); razor.ReferencedNamespaces.Add("ClassifiedsWeb"); razor.ReferencedNamespaces.Add("Westwind.Web"); razor.ReferencedNamespaces.Add("Westwind.Utilities"); _RazorHost = razor; _RazorHost.Start(); //_RazorHost.Engine.Configuration.CompileToMemory = false; } return _RazorHost; } The RazorFolderHostContainer essentially is a full runtime that mimics a folder structure like a typical Web app does including caching semantics and compiling code only if code changes on disk. It maps a folder hierarchy to views using the ~/ path syntax. 
The host is then configured to add assemblies and namespaces. Unfortunately the engine is not exactly like MVC's Razor - the expression expansion and code execution are the same, but some of the support methods like sections, helpers etc. are not all there so templates have to be a bit simpler. There are other folder hosts provided as well to directly execute templates from strings (using RazorStringHostContainer). The following is an example of an HTML email template @inherits RazorHosting.RazorTemplateFolderHost <ClassifiedsWeb.SearchNotificationViewModel> <html> <head> <title>Search Notifications</title> <style> body { margin: 5px;font-family: Verdana, Arial; font-size: 10pt;} h3 { color: SteelBlue; } .entry-item { border-bottom: 1px solid grey; padding: 8px; margin-bottom: 5px; } </style> </head> <body> Hello @Model.User.Name,<br /> <p>Below are your Search Results for the search phrase:</p> <h3>@Model.Notification.SearchPhrase</h3> <small>since @TimeUtils.ShortDateString(Model.Notification.LastSearch)</small> <hr /> You can see that the syntax is a little different. Instead of the familiar @model header the raw Razor  @inherits tag is used to specify the template base class (which you can extend). I took a quick look through the feature set of RazorEngine on CodePlex (now Github I guess) and the template implementation they use is closer to MVC's razor but there are other differences. In the end don't expect exact behavior like MVC templates if you use an external Razor rendering engine. This is not what I would consider an ideal solution, but it works well enough for this project. My biggest concern is the overhead of hosting a second razor engine in a Web app and the fact that here the differences in template rendering between 'real' MVC Razor views and another RazorEngine really are noticeable. You win some, you lose some It's extremely nice to see that if you have a ControllerContext handy (which probably addresses 99% of Web app scenarios) rendering a view to string using the native MVC Razor engine is pretty simple. Kudos on making that happen - as it solves a problem I see in just about every Web application I work on. But it is a bummer that a ControllerContext is required to make this simple code work. It'd be really sweet if there was a way to render views without being so closely coupled to the ASP.NET or MVC infrastructure that requires a ControllerContext. Alternately it'd be nice to have a way for an MVC based application to create a minimal ControllerContext from scratch - maybe somebody's been down that path. I tried for a few hours to come up with a way to make that work but gave up in the soup of nested contexts (MVC/Controller/View/Http). I suspect going down this path would be similar to hosting the ASP.NET runtime requiring a WorkerRequest. Brrr…. The sad part is that it seems to me that a View should really not require much 'context' of any kind to render output to string. Yes there are a few things that clearly are required like paths to the virtual and possibly the disk paths to the root of the app, but beyond that view rendering should not require much. But, no such luck. 
For now custom RazorHosting seems to be the only way to make Razor rendering go outside of the MVC context… Resources: Full ViewRenderer.cs source code from the Westwind.Web.Mvc library; Hosting the Razor Engine for Non-Web Applications; RazorEngine on GitHub. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC

    Read the article

  • Take,Skip and Reverse Operator in Linq

    - by Jalpesh P. Vadgama
    I have found three more new operators in LINQ that are useful in day-to-day programming: Take, Skip and Reverse. Here is an explanation of how each operator works. Take Operator: the Take operator returns the first N elements of a sequence. Skip Operator: the Skip operator skips the first N elements of a sequence and returns the remaining elements as the result. Reverse Operator: as the name suggests, it reverses the order of the elements of a sequence. Here is an example that uses a simple string array to demonstrate all three.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                string[] a = { "a", "b", "c", "d" };

                Console.WriteLine("Take Example");
                var TkResult = a.Take(2);
                foreach (string s in TkResult)
                {
                    Console.WriteLine(s);
                }

                Console.WriteLine("Skip Example");
                var SkResult = a.Skip(2);
                foreach (string s in SkResult)
                {
                    Console.WriteLine(s);
                }

                Console.WriteLine("Reverse Example");
                var RvResult = a.Reverse();
                foreach (string s in RvResult)
                {
                    Console.WriteLine(s);
                }
            }
        }
    }

    Here is the output, as expected. Hope this will help you. Technorati Tags: Linq,Linq-To-Sql,ASP.NET,C#.NET
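    Since the output only appeared as a screenshot in the original post, here is what the sample above prints when run (reconstructed from the code, not copied from the article):

    Take Example
    a
    b
    Skip Example
    c
    d
    Reverse Example
    d
    c
    b
    a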

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >