Search Results

Search found 45245 results on 1810 pages for 'html content extraction'.

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool:

    "Snapshot Replication can also be useful in BI environments. If you don't need a near real-time copy of the database, you can choose to use this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that acts as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and whether it really works as well for this purpose as I expect, but I want to try this out soon!"

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would have cost me to do the same in SSIS. What I did was the following:

    - Configure snapshot replication for some AdventureWorks tables; this was quite simple and straightforward.
    - Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something you can't do with out-of-the-box functionality.

    While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that when you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create an SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication

    The first step is to create a publication on the database you want to replicate:

    1. Connect in SQL Server Management Studio, right-click Replication and choose New Publication. The New Publication Wizard appears; click Next.
    2. Choose your "source" database and click Next.
    3. Choose Snapshot publication and click Next.
    4. You can now select the tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
    5. In the next screen you can add filters on the selected tables, which can be very useful; think about selecting only the last x days of data, for example. It's possible to filter out rows and/or columns. In this example I did not apply any filters.
    6. Schedule the Snapshot Agent to run at a desired time. By doing this, a SQL Agent job is created, which we will need to execute from an SSIS package later on.
    7. Next you need to set the security settings for the Snapshot Agent: click the Security Settings button. In this example I ran the Agent under the SQL Server Agent service account. This is not recommended as a security best practice.
    Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it here and make sure you follow the guidelines!

    8. On the next screen, choose to create the publication at the end of the wizard.
    9. Give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (the tables, in this case) are added.

    Now that the publication has been created successfully, it's time to create a new subscription for it:

    1. Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions. The New Subscription Wizard appears.
    2. Select the publisher on which you just created your publication, and select the database and publication (SnapshotTest).
    3. You can now choose where the Distribution Agent should run. If it runs at the distributor (a push subscription), it causes extra processing overhead there. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (a pull subscription) to reduce the processing overhead at the distributor.
    4. Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options and click OK.
    5. You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    6. You now need to set the security settings for the Distribution Agent: click the "..." button. Again, in this example I ran the Agent under the SQL Server Agent service account; read the security best practices here. Click Next.
    7. Make sure you create a synchronization job schedule again; this job is also needed in the SSIS package later on.
    8. Initialize the subscription at first synchronization, and select the first box to create the subscription when finishing this wizard. Complete the wizard by clicking Finish; the subscription will be created.

    In SSMS you will see that a new database, the subscriber, has been created. There are no tables or other objects in it yet, because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot; rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".

    Create an SSIS package that executes the snapshot replication

    We now need an SSIS package that will take care of the execution of both jobs: the CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs, and after the DistributeSnapshot job has started the package needs to wait until it is finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS, but unfortunately this task only executes the job and reports back whether it started successfully or not; it does not report whether the job actually completed with success or failure, because these jobs are asynchronous. The SSIS package I've created does the following:

    1. It runs the CreateSnapshot job.
    2. It checks every 5 seconds, in a for loop, whether the job has completed.
    3. When the CreateSnapshot job is completed, it starts the DistributeSnapshot job.
    4. Again, it waits until the snapshot is delivered before the package finishes successfully.

    Quite simple, and the package is ready to use as a standalone extract mechanism.
    After executing the package, the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).

    Conclusion

    In this example I replicated only 5 tables, and I could have created an SSIS package that does the same in approximately the same amount of time. But if I replicated all 70+ AdventureWorks tables, I would save a lot of time and boring work! With Replication Services you also benefit from schema changes being applied automatically, which means your extract phase won't break on them. Because a snapshot is created using the bcp (bulk copy) utility, it's also quite fast, so performance will be quite good. The disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: only SQL Server and Oracle databases can act as a publisher. So if you plan to build an extract phase for your ETL process that involves a lot of tables, think about Replication Services; it could save you a lot of time, and thanks to the extract SSIS package I've created you can fit it neatly into your usual SSIS ETL process.
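    For illustration, the start-and-wait pattern the package implements can be sketched in T-SQL. This is a minimal sketch of one common approach, not the package internals, and it assumes the job has been renamed to "CreateSnapshot" as described above:

        -- Start the snapshot job, then poll msdb until it is no longer running.
        EXEC msdb.dbo.sp_start_job @job_name = N'CreateSnapshot';

        WHILE EXISTS (
            SELECT 1
            FROM msdb.dbo.sysjobactivity ja
            JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
            WHERE j.name = N'CreateSnapshot'
              AND ja.start_execution_date IS NOT NULL
              AND ja.stop_execution_date IS NULL)
        BEGIN
            WAITFOR DELAY '00:00:05'; -- check every 5 seconds, as the package does
        END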

  • Is there a built-in jQuery function for encoding a string as HTML?

    - by Ben McCormack
    Is there a built-in jQuery function for encoding a string as HTML? I'm trying to take the text a user types into a text box and then put that text into a different area of the page. My plan was to take the .val() from the text box and supply that to the .html() of the <div> element. Perhaps there's a good jQuery plugin to help with this (if it's not built-in) or a better way overall to accomplish this goal.
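    For reference, a minimal sketch of the idiom commonly used for this (it is not a built-in jQuery method; the element ids are assumptions):

        // Let the browser do the encoding: set the value as text,
        // then read back the generated HTML.
        function htmlEncode(value) {
            return $('<div/>').text(value).html();
        }

        // Copy the text box value into the target div, encoded.
        $('#target').html(htmlEncode($('#input').val()));

    Note that $('#target').text($('#input').val()) reaches the same end result in one step.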

  • How to improve the HTML formatting in Evolution mail client

    - by Tom
    I have a question about viewing HTML emails in the Evolution mail client. Basically, I am receiving emails that look lovely in Thunderbird but not in Evolution, because Evolution's HTML rendering isn't as advanced. Does anyone know how to improve the HTML rendering of Evolution? E.g. a plugin, tip, code patch, etc. The closest I've got is to right-click the email, "Save As...", save it as an HTML file, then open it in Firefox. Not exactly streamlined! What emails can't it display well? We use the Subversion revision control system, which is set up to send an email whenever someone commits, via svnnotify, all nicely coloured via the --handler HTML::ColorDiff -d parameter. When Evolution fails to use the colours, I find it very hard to read the raw diff.

  • text extraction from video game dialogue files [on hold]

    - by wdwvt1
    As part of an academic project, I am trying to access the dialogue files (whether audio or text) from a variety of sports video games (Madden or NBA 2kX would be fantastic). I have searched extensively on other sites (scholarly text-mining publications, r/gaming, r/madden, modding sites, etc.) for guidance on how to extract dialogue files, but have been unsuccessful. Given that I don't even have the domain-specific language to ask the right question (i.e. the resources I am seeking are out there, I just can't find them), I am asking the SE game dev community for help with the 2 following questions:

    1. Is there a canonical resource that I should study that would get me started with how to extract text or audio files from games? I am very fluent in Python, which usually excels at mining information from sources, but I struggle with knowing where to start with a video game (as opposed to a more familiar database with a defined API).
    2. Is this even feasible, or are protections included with newer games (e.g. NBA 2k13) going to make extraction of these resources in a programmatic way impossible?

    Thank you for your help!

  • Hide table row based on content of table cell

    - by timkl
    I want to make some jQuery that shows some table rows and hides others based on the content of the first table cell in each row. When I click a list item, I want jQuery to check whether the first letter of the item matches the first letter of any table cell in my markup; if so, the parent table row should be shown and the other rows should be hidden (a sketch of one way to do this follows below). This is my markup:

        <ul>
          <li>A</li>
          <li>B</li>
          <li>G</li>
        </ul>
        <table>
          <tr><td>Alpha1</td><td>Some content</td></tr>
          <tr><td>Alpha2</td><td>Some content</td></tr>
          <tr><td>Alpha3</td><td>Some content</td></tr>
          <tr><td>Beta1</td><td>Some content</td></tr>
          <tr><td>Beta2</td><td>Some content</td></tr>
          <tr><td>Beta3</td><td>Some content</td></tr>
          <tr><td>Gamma1</td><td>Some content</td></tr>
          <tr><td>Gamma2</td><td>Some content</td></tr>
          <tr><td>Gamma3</td><td>Some content</td></tr>
        </table>

    So if I press "A", this is what is rendered in the browser:

        <ul>
          <li>A</li>
          <li>B</li>
          <li>G</li>
        </ul>
        <table>
          <tr><td>Alpha1</td><td>Some content</td></tr>
          <tr><td>Alpha2</td><td>Some content</td></tr>
          <tr><td>Alpha3</td><td>Some content</td></tr>
        </table>

    I'm really new to jQuery, so any hint on how to go about a problem like this would be appreciated :)
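    For illustration, a minimal sketch of one way to do this, untested against the page above and assuming the markup shown:

        // On click, compare the clicked letter with the first letter of
        // each row's first cell, and show or hide the row accordingly.
        $('ul li').click(function () {
            var letter = $(this).text().charAt(0);
            $('table tr').each(function () {
                var firstCell = $(this).find('td:first').text();
                $(this).toggle(firstCell.charAt(0) === letter);
            });
        });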

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server to which I'm transferring projects: the HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, such as stylesheet and javascript files, and even html pages served with the same Content-Type (text/html) as the PHP output, are compressed! The projects use the following rules (from HTML5 Boilerplate) in the .htaccess:

        <IfModule mod_deflate.c>
            # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
            <IfModule mod_setenvif.c>
                <IfModule mod_headers.c>
                    SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
                    RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
                </IfModule>
            </IfModule>
            # HTML, TXT, CSS, JavaScript, JSON, XML, HTC:
            <IfModule filter_module>
                FilterDeclare COMPRESS
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/html
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/css
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/plain
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/x-component
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/javascript
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/json
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/rss+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/atom+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject
                FilterProvider COMPRESS DEFLATE resp=Content-Type $image/svg+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $image/x-icon
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf
                FilterProvider COMPRESS DEFLATE resp=Content-Type $font/opentype
                FilterChain COMPRESS
                FilterProtocol COMPRESS DEFLATE change=yes;byteranges=no
            </IfModule>
        </IfModule>

    We have a testing machine that runs the same Apache, OS and PHP versions; on that machine the compression works just fine on the PHP output. I've checked and compared the Apache and PHP config files, and they are all the same as far as I can tell. I've tried several ways of outputting the content from PHP, using output buffering or just plain echoing the content. Same thing: no compression. Example response headers of a PHP output:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Accept-Ranges: bytes
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: public
        Pragma: no-cache
        Vary: User-Agent
        Keep-Alive: timeout=5, max=98
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

    Example response headers for a css file:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT
        Vary: Accept-Encoding,User-Agent
        Content-Encoding: gzip
        Cache-Control: public
        Expires: Fri, 25 May 2012 23:30:59 GMT
        Content-Length: 714
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/css; charset=utf-8

    Does anyone have a clue, or has anyone experienced the same "problem"? Thanks!
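    For reference, a commonly suggested simplification (a sketch, not verified against this particular setup) is to register mod_deflate by MIME type directly instead of going through the mod_filter chain, which sidesteps any filter_module matching issues:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript application/json
        </IfModule>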

  • Adding Unobtrusive Validation To MVCContrib Fluent Html

    - by srkirkland
    ASP.NET MVC 3 includes a new unobtrusive validation strategy that utilizes HTML5 data-* attributes to decorate form elements. Using a combination of jQuery validation and an unobtrusive validation adapter script that comes with MVC 3, those attributes are then turned into client-side validation rules.

    A Quick Introduction to Unobtrusive Validation

    To quickly show how this works in practice, assume you have the following Order.cs class (think Northwind). [If you are familiar with unobtrusive validation in MVC 3 you can skip to the next section.]

        public class Order : DomainObject
        {
            [DataType(DataType.Date)]
            public virtual DateTime OrderDate { get; set; }

            [Required]
            [StringLength(12)]
            public virtual string ShipAddress { get; set; }

            [Required]
            public virtual Customer OrderedBy { get; set; }
        }

    Note the System.ComponentModel.DataAnnotations attributes, which provide the validation and metadata information used by ASP.NET MVC 3 to determine how to render out these properties. Now let's assume we have a form which can edit this Order class; specifically, let's look at the ShipAddress property:

        @Html.LabelFor(x => x.Order.ShipAddress)
        @Html.EditorFor(x => x.Order.ShipAddress)
        @Html.ValidationMessageFor(x => x.Order.ShipAddress)

    Now the Html.EditorFor() method is smart enough to look at the ShipAddress attributes and write out the necessary unobtrusive validation html attributes. Note we could have used Html.TextBoxFor() or even Html.TextBox() and still retained the same results. If we view source on the input box generated by the Html.EditorFor() call, we get the following:

        <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress"
            data-val-required="The ShipAddress field is required."
            data-val-length-max="12"
            data-val-length="The field ShipAddress must be a string with a maximum length of 12."
            data-val="true" class="text-box single-line input-validation-error">

    As you can see, we have data-val-* attributes for both required and length, along with the proper error messages and additional data as necessary (in this case, the length-max="12"). And of course, if we try to submit the form with an invalid value, we get an error on the client.

    Working with MvcContrib's Fluent Html

    The MvcContrib project offers a fluent interface for creating Html elements which I find very expressive and useful, especially when it comes to creating select lists. Let's look at a few quick examples:

        @this.TextBox(x => x.FirstName).Class("required").Label("First Name:")
        @this.MultiSelect(x => x.UserId).Options(ViewModel.Users)
        @this.CheckBox("enabled").LabelAfter("Enabled").Title("Click to enable.").Styles(vertical_align => "middle")

        @(this.Select("Order.OrderedBy").Options(Model.Customers, x => x.Id, x => x.CompanyName)
            .Selected(Model.Order.OrderedBy != null ? Model.Order.OrderedBy.Id : "")
            .FirstOption(null, "--Select A Company--")
            .HideFirstOptionWhen(Model.Order.OrderedBy != null)
            .Label("Ordered By:"))

    These fluent html helpers create the normal html you would expect, and I think they make life a lot easier and more readable when dealing with complex markup or select list data models (look ma: no anonymous objects for creating class names!). Of course, the problem we have now is that MvcContrib's fluent html helpers don't know about ASP.NET MVC 3's unobtrusive validation attributes and thus don't take part in client validation on your page. This is not ideal, so I wrote a quick helper method to extend fluent html with the knowledge of which unobtrusive validation attributes to include when elements are rendered.

    Extending MvcContrib's Fluent Html

    Before posting the code, there are just a few things you need to know. The first is that all fluent html elements implement the IElement interface (MvcContrib.FluentHtml.Elements.IElement), and the second is that the base System.Web.Mvc.HtmlHelper has been extended with a method called GetUnobtrusiveValidationAttributes which we can use to determine the necessary attributes to include. With this knowledge we can make quick work of extending fluent html:

        public static class FluentHtmlExtensions
        {
            public static T IncludeUnobtrusiveValidationAttributes<T>(this T element, HtmlHelper htmlHelper)
                where T : MvcContrib.FluentHtml.Elements.IElement
            {
                IDictionary<string, object> validationAttributes = htmlHelper
                    .GetUnobtrusiveValidationAttributes(element.GetAttr("name"));

                foreach (var validationAttribute in validationAttributes)
                {
                    element.SetAttr(validationAttribute.Key, validationAttribute.Value);
                }

                return element;
            }
        }

    The code is pretty straightforward: basically we use a passed HtmlHelper to get a list of validation attributes for the current element, and then add each of the returned attributes to the element to be rendered.

    The Extension In Action

    Now let's get back to the earlier ShipAddress example and see what we've accomplished. First we will use a fluent html helper to render out the ship address text input (this is the 'before' case):

        @this.TextBox("Order.ShipAddress").Label("Ship Address:").Class("class-name")

    And the resulting HTML:

        <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label>
        <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" class="class-name">

    Now let's do the same thing, except here we'll use the newly written extension method:

        @this.TextBox("Order.ShipAddress").Label("Ship Address:")
            .Class("class-name").IncludeUnobtrusiveValidationAttributes(Html)

    And the resulting HTML:

        <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label>
        <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress"
            data-val-required="The ShipAddress field is required."
            data-val-length-max="12"
            data-val-length="The field ShipAddress must be a string with a maximum length of 12."
            data-val="true" class="class-name">

    Excellent! Now we can continue to use unobtrusive validation and have the flexibility to use ASP.NET MVC's Html helpers or MvcContrib's fluent html helpers interchangeably, and every element will participate in client-side validation.

    Wrap Up

    Overall I'm happy with this solution, although in the best-case scenario MvcContrib would know about unobtrusive validation attributes and include them automatically (if it is enabled in the web.config file, of course). I know that MvcContrib allows you to author global behaviors, but that requires changing the base class of your views, which I am not willing to do. Enjoy!

  • Tracking download of non-html (like pdf) downloads with jQuery and Google Analytics

    - by developerit
    Hi folks, it's been quite calm at Developer IT's this summer since we were all involved in other projects, but we are slowly coming back. In this post, we will present a simple way of tracking file downloads with Google Analytics, with the help of jQuery. We work for a client that offers a lot of pdf files for download on their web site and wanted to know which ones are the most popular. They have used Google Analytics for a long time now, and we did not want to have a second interface for presenting those stats to our client, so using the IIS logs was not an option to consider. Since Google already offers us a splendid web interface and a powerful API, we decided to hook simple javascript code into the jQuery click event to notify Analytics that a pdf has been requested:

        (function ($) {
            function trackLink(e) {
                var url = $(this).attr('href');
                //alert(url); // for debug purposes

                // old page tracker code
                pageTracker._trackPageview(url);
                // you can use the new one too
                _gaq.push(["_trackPageview", url]);

                // always return true, in order for the browser to continue its job
                return true;
            }

            // When DOM ready
            $(function () {
                // hook up the click event
                $('.pdf-links a').click(trackLink);
            });
        })(jQuery);

    You can be more precise, or even make sure not to miss a single click, by changing the selector which hooks up the click event. I have been using this code to track AJAX requests and it works flawlessly.

  • Oracle UCM 11g

    - by [email protected]
    The latest version of Oracle UCM 11g has now been released. There are big improvements, above all in the product architecture, which make us very optimistic, especially after seeing the performance and scalability results obtained. The link to all the information about the launch is here: Oracle Enterprise Content Management 11g. The most important new features are:

    Better integration into your work environment:
    - New desktop integration: content is managed using standard office tools.
    - One-click web content management, which allows web developers and editors to access and update content with a single click.
    - More functionality through integrations with other Oracle products.

    Unification of the content management technology stack: Oracle ECM Suite 11g now unifies all content repositories to make their management easier on a single infrastructure.

    Oracle Fusion Middleware infrastructure: Oracle ECM Suite 11g has moved completely to the Oracle Fusion Middleware platform, with all applications supported by Oracle WebLogic Server and managed through the Oracle Enterprise Manager dashboard.

    Extreme performance and scalability: the performance test figures, running on an Exadata machine, are spectacular. You can see a video of the performance here. Well... 172 million documents per day!!! And 124 pages per second with 2 CPUs... who wants to be the first to try it?

  • Height of a html window's content (not just the viewport height)

    - by gatapia
    Hi All, I'm trying to get the height of an html window's content: the full height of the content, not just the visible height. I have had some (very limited) success using:

        document.getElementsByTagName('html')[0].offsetHeight

    in Firefox. This however fails in IEs, and it fails in Chrome when using absolutely positioned elements (http://code.google.com/p/chromium/issues/detail?id=38999). A sample html file that can be used to reproduce this is:

        <html>
        <head>
          <style>
            div { border: solid 1px red; height: 2000px; width: 400px; }
            .broken { position: absolute; top: 0; left: 0; }
            .fixed { position: relative; top: 0; left: 0; }
          </style>
          <script language='javascript'>
            window.onload = function () {
              document.getElementById('window.height').innerHTML = window.innerHeight;
              document.getElementById('window.screen.height').innerHTML = window.screen.height;
              document.getElementById('document.html.height').innerHTML = document.getElementsByTagName('html')[0].offsetHeight;
            }
          </script>
        </head>
        <body>
          <div class='fixed'>
            window.height: <span id='window.height'>&nbsp;</span> <br/>
            window.screen.height: <span id='window.screen.height'></span> <br/>
            document.html.height: <span id='document.html.height'></span> <br/>
          </div>
        </body>
        </html>

    Thanks All,
    Guido Tapia
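    For reference, a commonly used cross-browser sketch takes the maximum of the candidate height properties on body and documentElement:

        function getDocumentHeight() {
            var body = document.body,
                html = document.documentElement;
            // offsetHeight/scrollHeight disagree across browsers,
            // so take the largest value reported.
            return Math.max(
                body.scrollHeight, body.offsetHeight,
                html.clientHeight, html.scrollHeight, html.offsetHeight
            );
        }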

  • jQuery - Loading content into div, styles not applied?

    - by Kenny Bones
    Hi! I'm trying to get this content loader to work. I've managed to get it to fetch new content, but once the content is loaded it isn't styled correctly. Also, the character "é" becomes a question mark; a doctype problem? On top of that, the h2 tag, which normally has Cufon applied to it, is not being processed (a sketch of one possible fix follows below). Basically, this content loader requires me to have a bunch of pages that are essentially the same, except for the content I want to retrieve. This way, users can use the actual URL as you'd normally expect; only when a link is clicked on an already loaded page is it just the content of the #content div that's really being replaced. I can post code here, but I think it's better to just watch it happen on the test page. It's very low on graphics btw ;) http://www.matkalenderen.no. Just click the blue text link and you'll see it. Also, the red button on the second loaded content is supposed to revert the content back to the previous state, but it's not being triggered or something. What's happening?
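    For illustration, a sketch of one possible fix for the Cufon part, assuming the content is fetched with jQuery's .load() (the URL and selectors here are assumptions): Cufon only replaces nodes that exist when Cufon.replace() runs, so it has to be re-applied in the completion callback:

        $('#content').load('page.html #content > *', function () {
            Cufon.replace('h2'); // re-apply the h2 replacement to the new content
            // or: Cufon.refresh(); to re-run earlier replacements
        });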

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
    Last night was the CS London group's event Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant. This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon. Not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations. A lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation. We're still competing for credibility and resource. What we're doing less is living in the beginning of a project. This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way) who described "the day two problem". Companies, he suggested, build websites for launch day, and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle. The thing about living through a lot of Day Two is that you see a lot of failure. Nothing succeeds like failure? Failure is good. When it's structured right, it's an awesome tool for learning, which is kind of how video games work. I'm chewing over a whole blog post about this, but basically in game-like learning, you try, fail, go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon. It works best when that failing step is acutely felt, but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save-points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure. As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop, although you may well be there because they've been round it once, but you can show other people's round trip. It's not as compelling, but it's not bad. What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal: content inventories, user testing, and analytics, for instance. But if we're picking up big organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do? My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success. Something quickly doable and visible: a content problem solved with a measurable business result. Now, that's a tall order; but scrape off the "quickly" and it's the whole reason we're here.
    At Red Gate, I've started with the textbook fear-and-passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends. And if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change. The first thing I've done, after the visual content inventory, is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list. The payoff here is twofold. There's the nebulous, hard-to-illustrate "bad content does user experience and brand damage" argument; and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable. But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.

  • Managed (.net) library with html-tidy like functionality?

    - by Eamon Nerbonne
    Does anybody know of an html cleaner for .NET that can parse html and (for instance) convert it to a more machine-friendly format such as xhtml? I've tried the HTML Agility Pack, but it fails to correctly parse even fairly simple examples. To give an example of html that should be parsed correctly:

        <html><body>
        <ul><li>TestElem1
        <li>TestElem2
        <li>TestElem3 List:
            <ul><li>Nested1
            <li>Nested2</li>
            <li>Nested3
            </ul>
        <li>TestElem4
        </ul>
        <p>paragraph 1
        <p>paragraph 2
        <p>paragraph 3
        </body></html>

    li tags don't need to be closed (see the spec), and neither do p tags. In other words, the above sample should be parsed as:

        <html><body>
        <ul><li>TestElem1</li>
        <li>TestElem2</li>
        <li>TestElem3 List:
            <ul><li>Nested1</li>
            <li>Nested2</li>
            <li>Nested3</li>
            </ul></li>
        <li>TestElem4</li>
        </ul>
        <p>paragraph 1</p>
        <p>paragraph 2</p>
        <p>paragraph 3</p>
        </body></html>

    Since the aim is to use the library on various machines, it's a big disadvantage to need to fall back to native code (such as a wrapper around HTML Tidy), which would require extra deployment hassle and sacrifice platform independence. Any suggestions? To recap, I'm looking for:

    - An html cleaner a la HTML Tidy
    - Must be able to deal with real-world html, not just xhtml; at the very least it must correctly read valid HTML 4
    - Must be able to convert to a more easily processable xml format
    - Should be a purely managed app

  • How do I use Content.Load() with raw XML files?

    - by xnanewb
    I'm using the Content.Load() mechanism to load core game definitions from XML files. It works fine, but some definitions should be editable/moddable by the players, and since the content pipeline compiles everything into xnb files, that doesn't work for now. I've seen that the built-in XNA Song content processor creates 2 files: 1 xnb file which contains metadata for the song, and 1 wma file which contains the actual data. I've tried to rebuild that mechanism (so that the second file is the actual xml file), but for some reason I can't use the namespace which contains the IntermediateSerializer class to load the xml (apparently the namespace is only available in a content project?). How can I deploy raw, editable xml files and load them with Content.Load()?
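    For reference, a commonly used workaround is sketched below. It sidesteps Content.Load() rather than extending it: mark the XML file as plain content that is copied to the output directory, and load it with LINQ to XML. The path and element names here are assumptions:

        // A sketch: load a raw, editable XML file copied to the output folder
        // instead of a compiled .xnb. "Definitions/units.xml" is a hypothetical
        // file with Build Action = Content and Copy to Output Directory set.
        using System.IO;
        using System.Xml.Linq;

        public static class DefinitionLoader
        {
            public static XDocument LoadDefinitions(string contentRoot)
            {
                string path = Path.Combine(contentRoot, "Definitions", "units.xml");
                return XDocument.Load(path); // players can edit this file freely
            }
        }

        // Usage inside a Game class:
        // XDocument defs = DefinitionLoader.LoadDefinitions(Content.RootDirectory);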

  • Parsing HTML with Python 2.7 - HTMLParser, SGMLParser, or Beautiful Soup?

    - by Eric Wilson
    I want to do some screen-scraping with Python 2.7, and I have no context for the differences between HTMLParser, SGMLParser, or Beautiful Soup. Are these all trying to solve the same problem, or do they exist for different reasons? Which is simplest, which is most robust, and which (if any) is the default choice? Also, please let me know if I have overlooked a significant option. Edit: I should mention that I'm not particularly experienced in HTML parsing, and I'm particularly interested in which will get me moving the quickest, with the goal of parsing HTML on one particular site.
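    For reference, a minimal Beautiful Soup sketch under Python 2.7 (using the bs4 package; the URL is a placeholder):

        import urllib2
        from bs4 import BeautifulSoup

        html = urllib2.urlopen('http://example.com/page.html').read()
        soup = BeautifulSoup(html)

        # print every link on the page
        for link in soup.find_all('a'):
            print link.get('href')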

  • Oracle and Eloqua Welcome Compendium’s Content Marketing

    - by Mike Stiles
    Yesterday, Oracle announced its acquisition of Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers' lifecycle. Why? Because every part of the above paragraph speaks to where modern marketing is and where it’s headed. Customers have now been empowered, thanks to the Internet and particularly social, with access to almost limitless amounts of information about companies and products. This includes the especially influential voices of friends and objective acquaintances that have experience with the product or brand. With mobile, this info is available instantly in the palm of their hand. All of this research and influence mind you, is taking place long before a prospect will ever engage with the brand itself or one of its sales reps. So how does a brand effectively insert itself into these conversations and this flow of the customer journey? Now, more than ever, marketers must deliver relevant and engaging content across multiple channels and throughout the entire customer journey to be useful, helpful, and influential. Compendium has a data-driven content marketing platform that lines up relevant content with customer data and personas so brands can accelerate the conversion of prospects. Now think about combining that with the Oracle Eloqua Marketing Cloud, part of Oracle's comprehensive CX solution. Marketers will be able to automate content delivery across channels by aligning persona-based content with customers' digital body language. Better customer engagement, improved sales lead quality, better return on marketing investment, and higher customer loyalty. Now we’re talking. Does data-driven content marketing have an impact? Compendium customer CVENT is a SaaS company specializing in meetings management tech. They wanted to increase leads & ad performance on their blog and dramatically increase their content. They also wanted to manage the creation, workflow, promotion and distribution of that content. With Compendium, CVENT created over 9,000 content elements, and sales-ready leads grew 325%. So Oracle Eloqua helps you target audiences, know buyers, and automate multi-channel marketing campaigns. Compendium lets you plan, publish, manage and measure content across content types and channels. Now kick it up yet another notch with Oracle’s Analytics, Big Data and Social solutions, and you’re using your marketing dollars to reach the right people in the right place at the right time with the right content. And as if that weren’t enough, your customers will love you for it. @mikestiles

  • restore content database in sharepoint server 2007

    - by Boris
    I have a site collection set up in a web app running on port 80, and I have made a backup of the site collection content db using the stsadm.exe tool. Now I want to restore that backup as a new content db for a different site collection: the one set up in the web app running on port 500. I have done the following:

    1. Created the backup
    2. Created a new web app on port 500 (I did not create a site collection for this web app)
    3. Removed the content db of that new web app using Central Administration
    4. Ran stsadm.exe -o addcontentdb -url webapp-at-port-500 -databasename

    The command completes successfully; however, when I check the Content Database page for that web app, it says that the Number of Sites is 0! Also, when I try to open http://webapp-at-port-500, I get an error saying that the webpage cannot be found. Could anyone please help me, it's driving me crazy. Thanks.
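    For reference, the full shape of that command is sketched below; the URL, database name and server name are hypothetical placeholders, and -databasename does need an actual value:

        stsadm -o addcontentdb -url http://server:500 -databasename WSS_Content_500 -databaseserver SQLSERVER01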

  • How to make MVC 4 Razor Html.Raw work for assignment in HTML within script tags

    - by Yarune
    For a project I'm using jqote for templating in JavaScript, with the HTML generated by MVC 4 with Razor. Please have a look at the following code in HTML and Razor:

        <script id="testTemplate" type="text/html">
            <p>Some html</p>
            @{string id = "<%=this.Id%>";}

            <!-- 1 -->
            @if(true) { @Html.Raw(@"<select id="""+id+@"""></select>") }

            <!-- 2 -->
            @if(true) { <select id="@Html.Raw(id)"></select> }

            <!-- 3 -->
            @Html.Raw(@"<select id="""+id+@"""></select>")

            <!-- 4 -->
            <select id="@Html.Raw(id)"></select>

            <!-- 5 -->
            <select id="<%=this.Id%>"></select>
        </script>

    The output is this:

        <script id="testTemplate" type="text/html">
            <!-- 1 -->
            <select id="<%=this.Id%>"></select>       <!-- Good! -->
            <!-- 2 -->
            <select id="&lt;%=this.Id%&gt;"></select> <!-- BAD! -->
            <!-- 3 -->
            <select id="<%=this.Id%>"></select>       <!-- Good! -->
            <!-- 4 -->
            <select id="<%=this.Id%>"></select>       <!-- Good! -->
            <!-- 5 -->
            <select id="<%=this.Id%>"></select>       <!-- Good! -->
        </script>

    Now, the problem is with the second select, under <!-- 2 -->. One would expect Html.Raw to kick in here, but somehow it doesn't; or Razor wants to HtmlEncode what's in there. The question is: does anyone have an idea why? Is this a bug or by design? Without the script tags it works, but we need the script tags because we need to template in JavaScript. Hardcoded it works, but we need to use a variable because this will not always be a template.

    Workarounds: these lines give similar good outputs:

        @if(true) { <select id= "@Html.Raw(id)"></select> }
        @if(true) { <select id ="@Html.Raw(id)"></select> }
        @if(true) { <select id @Html.Raw("=")"@Html.Raw(id)"></select> }

    We're planning to do the following, to keep the markup intact as much as possible:

        <script id="testTemplate" type="text/html">
            @{string id = @"id=""<%=this.Id%>""";}
            @if(true) { <select @Html.Raw(id)></select> }
        </script>

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT loses favour. (Note that currently ACS doesn't support JWT encryption; if you want encrypted tokens you need to go SAML.) In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims with just string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt:

        <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
          <t:Lifetime>
            <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>
            <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>
          </t:Lifetime>
          <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
            <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
              <Address>http://localhost/x.y.z</Address>
            </EndpointReference>
          </wsp:AppliesTo>
          <t:RequestedSecurityToken>
            <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ]</wsse:BinarySecurityToken>
          </t:RequestedSecurityToken>
          <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>
          <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
          <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
        </t:RequestSecurityTokenResponse>

    The token as a whole needs to be base64-decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base64, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm:

        {"typ":"JWT","alg":"HS256"}

    The payload contains the same data as in the SWT, but in JSON rather than querystring format:

        {"aud":"http://localhost/x.y.z",
         "iss":"https://adfstest-bhw.accesscontrol.windows.net/",
         "nbf":1346398795,
         "exp":1346404795,
         "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z",
         "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows",
         "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz",
         "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]",
         "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]",
         "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"}

    The signature is in the third part of the token. Unlike SWT, which is fixed to HMAC-SHA-256, JWT can support other protocols (the one in use is specified as the "alg" value in the header).
    How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base64-decoding part (a sketch of which follows below), it's very similar to SWT extraction. I've wrapped the basic SWT and JWT handling in a ClaimInspector.aspx page on GitHub here: SWT and JWT claim inspector. You can drop it into any ASP.NET site and set the URL to be your redirect page in ACS. Swap ACS to issue SWT or JWT, and using the same page you can inspect the claims that come out.
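    For illustration, a sketch of that no-padding ("base64url") decode step in C#, following the approach described in the MSDN article:

        using System;
        using System.Text;

        static byte[] DecodeTokenPart(string part)
        {
            // JWT parts use '-' and '_' and omit padding; restore both
            // before handing off to the standard decoder.
            string s = part.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)
            {
                case 2: s += "=="; break;
                case 3: s += "="; break;
            }
            return Convert.FromBase64String(s);
        }

        // Usage: split the raw token on '.' and decode header and payload.
        // string[] parts = rawToken.Split('.');
        // string headerJson = Encoding.UTF8.GetString(DecodeTokenPart(parts[0]));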

  • Cooking with Wessty: HTML 5 and Visual Studio

    - by David Wesst
    The hardest part about using a new technology, such as HTML 5, is getting to grips with what features are available and the syntax. One way to learn a new technology is to adapt your current development setup so you can use the technology in the comfort of your own environment. For .NET web developers, that environment is usually Visual Studio 2010. This technique shows you how to get HTML 5 IntelliSense working in your current version of Visual Studio 2008 or 2010, making it easier for you to start using HTML 5 features in your current .NET web development projects.

    Quick Note

    According to the Visual Web Developer team at Microsoft, the Visual Studio 2010 SP1 beta has support for both HTML 5 and CSS 3. If you are willing to try out the bleeding-edge update from Microsoft, then you won't need this technique.

    Ingredients

    - Visual Studio 2008 or 2010
    - Your favourite HTML 5 compliant browser (e.g. Internet Explorer 9)
    - Administrator privileges, or the ability to install Visual Studio extensions in your development environment

    Directions

    1. Download the HTML 5 Intellisense for Visual Studio 2008 and 2010 extension from the Visual Studio Extension Gallery, and install it.
    2. Open Visual Studio.
    3. Open up a web file, such as an HTML or ASPX file. The HTML Source Editing toolbar should have appeared.
    4. (Optional) If it did not appear, you can activate it through the main menu by selecting View, then Toolbars, and then selecting HTML Source Editing if it does not have a checkbox beside it. (NOTE: If there is a checkbox, then the toolbar is already enabled.)
    5. In the HTML Source Editing toolbar, open the validation schema drop box and select HTML 5.

    Et voila! You now have HTML 5 IntelliSense enabled to help you get started in adding HTML 5 awesomeness to your web sites and web applications.

    Optional: Setting HTML 5 Validation Options

    At this point, you may want to select how Visual Studio shows validation errors. You can do that in the Options menu:

    1. In the main menu, select Tools, then Options.
    2. In the Options window, select and expand Text Editor, then HTML, followed by selecting Validation.

    Resources

    - HTML 5 Intellisense for Visual Studio 2008 and 2010 extension
    - Visual Studio Extension Gallery
    - Visual Studio 2010 SP1 Beta

    This post also appears at http://david.wes.st

  • Creating ASP.NET MVC Negotiated Content Results

    - by Rick Strahl
    In a recent ASP.NET MVC application I'm involved with, we had a late-in-the-process request to handle content negotiation: returning output based on the HTTP Accept header of the incoming HTTP request. This is standard behavior in ASP.NET Web API, but ASP.NET MVC doesn't support this functionality directly out of the box. Another reason this came up in discussion is last week's announcements of ASP.NET vNext, which seem to indicate that ASP.NET Web API is not going to be ported to the cloud version of vNext, but rather will be replaced by a combined version of MVC and Web API. While it's not clear what new API features will show up in this new framework, it's pretty clear that the ASP.NET MVC style syntax will be the new standard for the combined HTTP processing framework.

    Why negotiated content?

    Content negotiation is one of the key features of Web API, even though it's such a relatively simple thing. It's also something that's missing in MVC, and once you get used to automatically having your content returned based on Accept headers it's hard to go back to manually creating separate methods for different output types, as you've had to with Microsoft server technologies all along (yes, yes, I know other frameworks, including my own, have done this for years, but as an in-the-box feature this is relatively new with Web API).

    As a quick review, Accept-header content negotiation works off the request's HTTP Accept header:

        POST http://localhost/mydailydosha/Editable/NegotiateContent HTTP/1.1
        Content-Type: application/json
        Accept: application/json
        Host: localhost
        Content-Length: 76
        Pragma: no-cache

        { ElementId: "header", PageName: "TestPage", Text: "This is a nice header" }

    If I make this request, I would expect to get back a JSON result based on my application/json Accept header. To request XML I'd just change the accept header:

        Accept: text/xml

    and now I'd expect the response to come back as XML. This only works with media types that the server can process; in my case here I need to handle JSON, XML, HTML (using Views) and plain text, so effectively 4 media types. HTML results might need more than just a data return: you also probably need to specify a View to render the data into, either by specifying the view explicitly or by using some sort of convention that can automatically locate a view to match. Today ASP.NET MVC doesn't support this sort of automatic content switching out of the box. Unfortunately, in my application scenario we have an application that started out primarily with an AJAX backend that was implemented with JSON only, so there are lots of JSON results like this:

        [Route("Customers")]
        public ActionResult GetCustomers()
        {
            return Json(repo.GetCustomers(), JsonRequestBehavior.AllowGet);
        }

    These work fine, but they are of course JSON-specific. Then a couple of weeks ago a requirement came in that an old desktop application needs to also consume this API, and it has to use XML to do it because there's no JSON parser available for it. Oops: stuck with JSON in this case. While it would have been easy to add XML-specific methods, I figured it's easier to add basic content negotiation, and that's what I show in this post.

    Missteps: IResultFilter, IActionFilter

    My first attempt at this was to use IResultFilter or IActionFilter, which look like they would be ideal to modify result content after it's been generated, using OnResultExecuted() or OnActionExecuted().
    Filters are great because they can look globally at all controller methods, or at individual methods that are marked up with the filter's attribute. But it turns out these filters don't work for raw POCO result values from action methods. What we wanted to do for API calls is get back to using plain .NET types as results rather than result actions. That is, you write a method that doesn't return an ActionResult, but a standard .NET type, like this:

        public Customer UpdateCustomer(Customer cust)
        {
            // … do stuff to customer :-)
            return cust;
        }

    Unfortunately both OnResultExecuted and OnActionExecuted receive an MVC ContentResult instance built from the POCO object. MVC basically takes any non-ActionResult return value and turns it into a ContentResult by converting the value using .ToString(). Ugh. The ContentResult itself doesn't contain the original value, which is lost, AFAIK, with no way to retrieve it. So there's no way to access the raw Customer object in the example above. Bummer.

    Creating a NegotiatedResult

    This leaves mucking around with custom ActionResults. ActionResults are MVC's standard way to return action method results: you basically specify that you would like to render your result in a specific format. Common ActionResults are ViewResults (i.e. View(vn, model)), JsonResult, RedirectResult and so on. They work, are fairly effective, and work fairly well for testing, as they're the 'standard' interface for returning results from actions. The problem with this is mainly that you're explicitly saying that you want a specific result output type. This works well for many things, but sometimes you do want your result to be negotiated.

    My first crack at a solution here is to create a simple ActionResult subclass that looks at the Accept header and, based on that, writes the output. I need to support JSON and XML content, plus HTML and text, so effectively 4 media types: application/json, text/xml, text/html and text/plain. Everything else is passed through as a ContentResult, which effectively returns whatever .ToString() returns. Here's what the NegotiatedResult usage looks like:

        public ActionResult GetCustomers()
        {
            return new NegotiatedResult(repo.GetCustomers());
        }

        public ActionResult GetCustomer(int id)
        {
            return new NegotiatedResult("Show", repo.GetCustomer(id));
        }

    There are two overloads of this method: one that takes just the raw result value, and a second version that accepts an optional view name. The second version returns the Razor view specified only if text/html is requested; otherwise the raw data is returned. This is useful in applications where you have an HTML front end that can also double as an API endpoint that's using the same model data you send to the View. For the application I mentioned above this was another actual use case we needed to address, so this was a welcome side effect of creating a custom ActionResult. There's also an extension method that directly attaches a Negotiated() method to the controller using the same syntax:

        public ActionResult GetCustomers()
        {
            return this.Negotiated(repo.GetCustomers());
        }

        public ActionResult GetCustomer(int id)
        {
            return this.Negotiated("Show", repo.GetCustomer(id));
        }

    Using either of these mechanisms now allows you to return JSON, XML, HTML or plain-text results depending on the Accept header sent. Send application/json and you get just the Customer JSON data; ditto for text/xml and XML data.
    While this isn't as clean as passing just POCO objects back as I had intended originally, this approach fits better with how MVC action methods are meant to be used, and we get the bonus of being able to specify a View to render (optionally) for HTML.

    How does it work

    An ActionResult implementation is pretty straightforward. You inherit from ActionResult and implement the ExecuteResult method to send your output to the ASP.NET output stream. Custom ActionResults are an easy way to do post-processing on ASP.NET MVC controller actions just before the content is sent to the output stream, assuming your specific action result was used. Here's the full code of the NegotiatedResult class (you can also check it out on GitHub):

        using System;
        using System.Linq;
        using System.Text;
        using System.Web;
        using System.Web.Mvc;
        using System.Xml;
        using System.Xml.Serialization;
        using Newtonsoft.Json;

        /// <summary>
        /// Returns a content negotiated result based on the Accept header.
        /// Minimal implementation that works with JSON and XML content,
        /// can also optionally return a view with HTML.
        /// </summary>
        /// <example>
        /// // model data only
        /// public ActionResult GetCustomers()
        /// {
        ///     return new NegotiatedResult(repo.Customers.OrderBy(c => c.Company));
        /// }
        /// // optional view for HTML
        /// public ActionResult GetCustomers()
        /// {
        ///     return new NegotiatedResult("List", repo.Customers.OrderBy(c => c.Company));
        /// }
        /// </example>
        public class NegotiatedResult : ActionResult
        {
            /// <summary>
            /// Data stored to be 'serialized'. Public
            /// so it's potentially accessible in filters.
            /// </summary>
            public object Data { get; set; }

            /// <summary>
            /// Optional name of the HTML view to be rendered
            /// for HTML responses
            /// </summary>
            public string ViewName { get; set; }

            public static bool FormatOutput { get; set; }

            static NegotiatedResult()
            {
                FormatOutput = HttpContext.Current.IsDebuggingEnabled;
            }

            /// <summary>
            /// Pass in data to serialize
            /// </summary>
            /// <param name="data">Data to serialize</param>
            public NegotiatedResult(object data)
            {
                Data = data;
            }

            /// <summary>
            /// Pass in data and an optional view for HTML views
            /// </summary>
            /// <param name="viewName">View to render when Accept is text/html</param>
            /// <param name="data">Data to serialize</param>
            public NegotiatedResult(string viewName, object data)
            {
                Data = data;
                ViewName = viewName;
            }

            public override void ExecuteResult(ControllerContext context)
            {
                if (context == null)
                    throw new ArgumentNullException("context");

                HttpResponseBase response = context.HttpContext.Response;
                HttpRequestBase request = context.HttpContext.Request;

                // Look for specific content types
                if (request.AcceptTypes.Contains("text/html"))
                {
                    response.ContentType = "text/html";

                    if (!string.IsNullOrEmpty(ViewName))
                    {
                        var viewData = context.Controller.ViewData;
                        viewData.Model = Data;

                        var viewResult = new ViewResult
                        {
                            ViewName = ViewName,
                            MasterName = null,
                            ViewData = viewData,
                            TempData = context.Controller.TempData,
                            ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection
                        };
                        viewResult.ExecuteResult(context.Controller.ControllerContext);
                    }
                    else
                        response.Write(Data);
                }
                else if (request.AcceptTypes.Contains("text/plain"))
                {
                    response.ContentType = "text/plain";
                    response.Write(Data);
                }
                else if (request.AcceptTypes.Contains("application/json"))
                {
                    response.ContentType = "application/json";

                    using (JsonTextWriter writer = new JsonTextWriter(response.Output))
                    {
                        var settings = new JsonSerializerSettings();
                        if (FormatOutput)
                            settings.Formatting = Newtonsoft.Json.Formatting.Indented;

                        JsonSerializer serializer = JsonSerializer.Create(settings);
                        serializer.Serialize(writer, Data);
                        writer.Flush();
                    }
                }
                else if (request.AcceptTypes.Contains("text/xml"))
                {
                    response.ContentType = "text/xml";

                    if (Data != null)
                    {
                        using (var writer = new XmlTextWriter(response.OutputStream, new UTF8Encoding()))
                        {
                            if (FormatOutput)
                                writer.Formatting = System.Xml.Formatting.Indented;

                            XmlSerializer serializer = new XmlSerializer(Data.GetType());
                            serializer.Serialize(writer, Data);
                            writer.Flush();
                        }
                    }
                }
                else
                {
                    // just write data as a plain string
                    response.Write(Data);
                }
            }
        }

        /// <summary>
        /// Extends Controller with a Negotiated() ActionResult that does
        /// basic content negotiation based on the Accept header.
        /// </summary>
        public static class NegotiatedResultExtensions
        {
            /// <summary>
            /// Returns content-negotiated output based on the Accept header.
            /// Supports:
            ///     application/json - using JSON.NET
            ///     text/xml         - XML via XmlSerializer
            ///     text/html        - as text, or an optional View
            ///     text/plain       - as text
            /// </summary>
            /// <param name="controller">The controller instance</param>
            /// <param name="data">Data to return</param>
            /// <returns>serialized data</returns>
            /// <example>
            /// public ActionResult GetCustomers()
            /// {
            ///     return this.Negotiated(repo.Customers.OrderBy(c => c.Company));
            /// }
            /// </example>
            public static NegotiatedResult Negotiated(this Controller controller, object data)
            {
                return new NegotiatedResult(data);
            }

            /// <summary>
            /// Returns content-negotiated output based on the Accept header,
            /// rendering the given View when Accept is text/html.
            /// </summary>
            /// <param name="controller">The controller instance</param>
            /// <param name="viewName">Name of the View to render when Accept is text/html</param>
            /// <param name="data">Data to return</param>
            /// <returns>serialized data</returns>
            /// <example>
            /// public ActionResult GetCustomers()
            /// {
            ///     return this.Negotiated("List", repo.Customers.OrderBy(c => c.Company));
            /// }
            /// </example>
            public static NegotiatedResult Negotiated(this Controller controller, string viewName, object data)
            {
                return new NegotiatedResult(viewName, data);
            }
        }
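    One thing worth noting (my observation, not part of the original article): the branches are checked in a fixed order, so whichever supported media type appears in the Accept list wins according to that order, with text/html taking precedence. A small standalone illustration of that selection logic:

        using System;
        using System.Linq;

        class AcceptOrderDemo
        {
            static void Main()
            {
                // Media types a browser typically advertises
                string[] acceptTypes = { "text/html", "application/xhtml+xml", "application/xml" };

                // Mirrors the if/else chain in NegotiatedResult.ExecuteResult()
                string chosen =
                    acceptTypes.Contains("text/html")        ? "text/html (View)" :
                    acceptTypes.Contains("text/plain")       ? "text/plain" :
                    acceptTypes.Contains("application/json") ? "application/json" :
                    acceptTypes.Contains("text/xml")         ? "text/xml" :
                                                               "fallback ToString()";

                Console.WriteLine(chosen);   // -> text/html (View)
            }
        }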
    Output Generation – JSON and XML

    Generating output for XML and JSON is simple – you use the desired serializer and off you go. Using XmlSerializer and JSON.NET, it's just a handful of lines each to generate serialized output directly into the HTTP output stream. Note that this implementation uses JSON.NET for its JSON generation rather than the JavaScriptSerializer that MVC uses by default, which I feel is an additional bonus of implementing this custom action. I'd already been using a custom JsonNetResult class previously; now this is just rolled into this custom ActionResult. Just keep in mind that JSON.NET outputs slightly different JSON for certain things – collections, for example – so behavior may change. One addition to this implementation might be a flag that allows switching the JSON serializer.
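    As a sketch of that idea – the UseJsonNet flag and WriteJson helper below are hypothetical, not part of the published class – the JSON branch could delegate to a pluggable serializer with a fallback to the stock JavaScriptSerializer:

        // Hypothetical members of NegotiatedResult
        public static bool UseJsonNet = true;

        private void WriteJson(HttpResponseBase response, object data)
        {
            response.ContentType = "application/json";

            if (UseJsonNet)
            {
                // JSON.NET - same path the implementation above uses
                using (var writer = new JsonTextWriter(response.Output))
                {
                    var settings = new JsonSerializerSettings();
                    if (FormatOutput)
                        settings.Formatting = Newtonsoft.Json.Formatting.Indented;
                    JsonSerializer.Create(settings).Serialize(writer, data);
                    writer.Flush();
                }
            }
            else
            {
                // Stock ASP.NET serializer (what MVC's Json() helper uses)
                var js = new System.Web.Script.Serialization.JavaScriptSerializer();
                response.Write(js.Serialize(data));
            }
        }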
    Html View Generation

    HTML view generation actually turned out to be easier than anticipated. Initially I used my generic ASP.NET ViewRenderer class, which can render MVC views from any ASP.NET application. But it turns out that since we are executing inside of an active MVC request, there's an easier way: we can simply create a custom ViewResult, populate its members and then execute it. The code in the text/html branch that renders the view is simply this:

        response.ContentType = "text/html";
        if (!string.IsNullOrEmpty(ViewName))
        {
            var viewData = context.Controller.ViewData;
            viewData.Model = Data;

            var viewResult = new ViewResult
            {
                ViewName = ViewName,
                MasterName = null,
                ViewData = viewData,
                TempData = context.Controller.TempData,
                ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection
            };
            viewResult.ExecuteResult(context.Controller.ControllerContext);
        }
        else
            response.Write(Data);

    which is a neat and easy way to render a Razor view, assuming you have an active controller that's ready for rendering. Sweet – dependency removed, which makes this class self-contained with no external dependencies other than JSON.NET.

    Summary

    While this isn't exactly a new topic, it's the first time I've actually delved into it with MVC. I've been doing content negotiation with Web API and, prior to that, with my REST library, but this is the first time it's come up as an issue in MVC. As I worked through this, I found that having a way to return both HTML Views *and* JSON and XML results from a single controller is certainly appealing in many situations – in this particular application we are returning identical data models for each of these operations. Rendering content-negotiated views is something I hope ASP.NET vNext will provide natively in the combined MVC and Web API model, but we'll see how that actually gets implemented. In the meantime, having a custom ActionResult that provides this functionality is a workable and easily adaptable way of handling this going forward. Whatever ends up happening in ASP.NET vNext, the abstraction can probably be changed to support the native features of the future. Anyway, I hope some of you found this useful – if not for direct integration, then as insight into some of the rendering logic that MVC uses to get output into the HTTP stream.

    Related Resources

        Latest Version of NegotiatedResult.cs on GitHub
        Understanding Action Controllers
        Rendering ASP.NET Views To String

    © Rick Strahl, West Wind Technologies, 2005-2014
    Posted in MVC  ASP.NET  HTTP

    Read the article

  • Groovy pretty print XmlSlurper output from HTML?

    - by Misha Koshelev
    Dear All: I am using several different versions to do this, but all seem to result in this error:

        [Fatal Error] :1:171: The prefix "xmlns" cannot be bound to any namespace explicitly; neither can the namespace for "xmlns" be bound to any prefix explicitly.

    I load the html as:

        // Load html file
        def fis = new FileInputStream("2.html")
        def html = new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(fis.text)

    Versions I've tried:

    http://johnrellis.blogspot.com/2009/08/hmmm_04.html

        import groovy.xml.StreamingMarkupBuilder
        import groovy.xml.XmlUtil

        def streamingMarkupBuilder = new StreamingMarkupBuilder()
        println XmlUtil.serialize(streamingMarkupBuilder.bind { mkp.yield html })

    http://old.nabble.com/How-to-print-XmlSlurper%27s-NodeChild-with-indentation--td16857110.html

        // Output
        import groovy.xml.MarkupBuilder
        import groovy.xml.StreamingMarkupBuilder
        import groovy.util.XmlNodePrinter
        import groovy.util.slurpersupport.NodeChild

        def printNode(NodeChild node) {
            def writer = new StringWriter()
            writer << new StreamingMarkupBuilder().bind {
                mkp.declareNamespace('': node[0].namespaceURI())
                mkp.yield node
            }
            new XmlNodePrinter().print(new XmlParser().parseText(writer.toString()))
        }

    Any advice? Thank you! Misha

    Read the article

  • ASP.NET Content Web Form - content from placeholder disappears

    - by Naeem Sarfraz
    I'm attempting to set a class on the body tag in my ASP.NET site, which uses a master page and content web forms. I simply want to be able to do this by adding a bodycssclass property (see below) to the content web form's page directive. It works through the solution below, but when I attempt to view Default.aspx the Content1 control loses its content. Any ideas why? Here is how I'm doing it. I have a master page with the following content:

        <%@ Master Language="C#" ... %>
        <html><head>...</head>
        <body id="ctlBody" runat="server">
            <asp:ContentPlaceHolder ID="cphMain" runat="server" />
        </body>
        </html>

    Its code-behind looks like:

        public partial class Site : MasterPageBase
        {
            public override string BodyCssClass
            {
                get { return ctlBody.Attributes["class"]; }
                set { ctlBody.Attributes["class"] = value; }
            }
        }

    It inherits from:

        public abstract class MasterPageBase : MasterPage
        {
            public abstract string BodyCssClass { get; set; }
        }

    My default.aspx is defined as:

        <%@ Page Title="..." [master page definition etc..] bodycssclass="home" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="cphMain" runat="server">
            Some content
        </asp:Content>

    The code-behind for this file looks like:

        public partial class Default : PageBase { ... }

    and it inherits from:

        public class PageBase : Page
        {
            public string BodyCssClass
            {
                get
                {
                    MasterPageBase mpbCurrent = this.Master as MasterPageBase;
                    return mpbCurrent.BodyCssClass;
                }
                set
                {
                    MasterPageBase mpbCurrent = this.Master as MasterPageBase;
                    mpbCurrent.BodyCssClass = value;
                }
            }
        }

    Read the article

  • Symfony2 Include in html in Twig

    - by Haritz
    I am developing an application using Symfony2 and Twig for templates. I am using a 3-level structure for templates: base.html.twig, layout.html.twig and childtemplate.html.twig. The problem is that I am trying to include one example.html (a common html file) in the next child template by using include, but it doesn't work properly. Where can the problem be?

        {# src/Anotatzailea/AnotatzaileaBundle/Resources/views/Page/testuaanotatu.html.twig #}
        {% extends 'AnotatzaileaAnotatzaileaBundle::layout.html.twig' %}

        {% block title %}Testua anotatu{% endblock %}

        {% block body %}
            {% include "var/www/Symfony/web/example.html" %}
        {% endblock %}

    Read the article

  • how to find the height of html content

    - by ganapati hegde
    Hi, I am trying to display 10 HTML pages as a single document with 10 chapters. I am using the WebKitGTK+ engine to render the HTML pages. I am getting the content of each HTML file and concatenating all of them to create a single 'char *content', and then using

        webkit_web_view_load_html_string(WebKitWebView *web_view, const gchar *content, const gchar *base_uri);

    to load all the HTML files as one document. Now I am trying to build a 'table of contents (TOC)' for this document, which displays all the chapter names in order. My requirement is that when I click on a particular chapter in the TOC, the document should scroll to that point. For that I need to know the height of each chapter (the height of each HTML file's content). The point to note here is that the width of the document can change, and since HTML is reflowable, the height decreases as the width increases and vice versa. So each time the height changes I need to find out the current height of each HTML page (or chapter) and from that calculate the distance to be scrolled. How can I find out the height of the content of an HTML page dynamically? Thank you for the answer....

    Read the article
