Search Results

Search found 7748 results on 310 pages for 'iis express'.


  • How do I get my dependencies injected using @Configurable in conjunction with readResolve()

    - by bmatthews68
    The framework I am developing for my application relies very heavily on dynamically generated domain objects. I recently started using Spring WebFlow and now need to be able to serialize my domain objects that will be kept in flow scope. I have done a bit of research and figured out that I can use writeReplace() and readResolve(). The only catch is that I need to look up a factory in the Spring context. I tried to use @Configurable(preConstruction = true) in conjunction with the BeanFactoryAware marker interface, but beanFactory is always null when I try to use it in my createEntity() method. Neither the default constructor nor the setBeanFactory() injector is called. Has anybody tried this or something similar? I have included the relevant class below. Thanks in advance, Brian

    /*
     * Copyright 2008 Brian Thomas Matthews Limited.
     * All rights reserved, worldwide.
     *
     * This software and all information contained herein is the property of
     * Brian Thomas Matthews Limited. Any dissemination, disclosure, use, or
     * reproduction of this material for any reason inconsistent with the
     * express purpose for which it has been disclosed is strictly forbidden.
     */
    package com.btmatthews.dmf.domain.impl.cglib;

    import java.io.InvalidObjectException;
    import java.io.ObjectStreamException;
    import java.io.Serializable;
    import java.lang.reflect.InvocationTargetException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.commons.beanutils.PropertyUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.BeanFactory;
    import org.springframework.beans.factory.BeanFactoryAware;
    import org.springframework.beans.factory.annotation.Configurable;
    import org.springframework.util.StringUtils;

    import com.btmatthews.dmf.domain.IEntity;
    import com.btmatthews.dmf.domain.IEntityFactory;
    import com.btmatthews.dmf.domain.IEntityID;
    import com.btmatthews.dmf.spring.IEntityDefinitionBean;

    /**
     * This class represents the serialized form of a domain object implemented
     * using CGLib. The readResolve() method recreates the actual domain object
     * after it has been deserialized into Serializable. You must define
     * <spring-configured/> in the application context.
     *
     * @param <S> The interface that defines the properties of the base domain object.
     * @param <T> The interface that defines the properties of the derived domain object.
     * @author <a href="mailto:[email protected]">Brian Matthews</a>
     * @version 1.0
     */
    @Configurable(preConstruction = true)
    public final class SerializedCGLibEntity<S extends IEntity<S>, T extends S>
            implements Serializable, BeanFactoryAware {

        /** Used for logging. */
        private static final Logger LOG = LoggerFactory
                .getLogger(SerializedCGLibEntity.class);

        /** The serialization version number. */
        private static final long serialVersionUID = 3830830321957878319L;

        /** The application context. Note this is not serialized. */
        private transient BeanFactory beanFactory;

        /** The domain object name. */
        private String entityName;

        /** The domain object identifier. */
        private IEntityID<S> entityId;

        /** The domain object version number. */
        private long entityVersion;

        /** The attributes of the domain object. */
        private HashMap<?, ?> entityAttributes;

        /** The default constructor. */
        public SerializedCGLibEntity() {
            SerializedCGLibEntity.LOG
                    .debug("Initializing with default constructor");
        }

        /**
         * Initialise with the attributes to be serialised.
         *
         * @param name The entity name.
         * @param id The domain object identifier.
         * @param version The entity version.
         * @param attributes The entity attributes.
         */
        public SerializedCGLibEntity(final String name, final IEntityID<S> id,
                final long version, final HashMap<?, ?> attributes) {
            SerializedCGLibEntity.LOG
                    .debug("Initializing with parameterized constructor");
            this.entityName = name;
            this.entityId = id;
            this.entityVersion = version;
            this.entityAttributes = attributes;
        }

        /**
         * Inject the bean factory.
         *
         * @param factory The bean factory.
         */
        public void setBeanFactory(final BeanFactory factory) {
            SerializedCGLibEntity.LOG.debug("Injected bean factory");
            this.beanFactory = factory;
        }

        /**
         * Called after deserialisation. The corresponding entity factory is
         * retrieved from the bean application context and BeanUtils methods are
         * used to initialise the object.
         *
         * @return The initialised domain object.
         * @throws ObjectStreamException If there was a problem creating or
         *             initialising the domain object.
         */
        public Object readResolve() throws ObjectStreamException {
            SerializedCGLibEntity.LOG.debug("Transforming deserialized object");
            final T entity = this.createEntity();
            entity.setId(this.entityId);
            try {
                PropertyUtils.setSimpleProperty(entity, "version",
                        this.entityVersion);
                for (Map.Entry<?, ?> entry : this.entityAttributes.entrySet()) {
                    PropertyUtils.setSimpleProperty(entity, entry.getKey()
                            .toString(), entry.getValue());
                }
            } catch (IllegalAccessException e) {
                throw new InvalidObjectException(e.getMessage());
            } catch (InvocationTargetException e) {
                throw new InvalidObjectException(e.getMessage());
            } catch (NoSuchMethodException e) {
                throw new InvalidObjectException(e.getMessage());
            }
            return entity;
        }

        /**
         * Lookup the entity factory in the application context and create an
         * instance of the entity. The entity factory is located by getting the
         * entity definition bean and using the factory registered with it, or by
         * getting the entity factory directly. The name used for the definition
         * bean lookup is ${entityName}Definition while ${entityName}Factory is
         * used for the factory lookup.
         *
         * @return The domain object instance.
         * @throws ObjectStreamException If the entity definition bean or entity
         *             factory were not available.
         */
        @SuppressWarnings("unchecked")
        private T createEntity() throws ObjectStreamException {
            SerializedCGLibEntity.LOG.debug("Getting domain object factory");

            // Try to use the entity definition bean
            final IEntityDefinitionBean<S, T> entityDefinition = (IEntityDefinitionBean<S, T>) this.beanFactory
                    .getBean(StringUtils.uncapitalize(this.entityName)
                            + "Definition", IEntityDefinitionBean.class);
            if (entityDefinition != null) {
                final IEntityFactory<S, T> entityFactory = entityDefinition
                        .getFactory();
                if (entityFactory != null) {
                    SerializedCGLibEntity.LOG
                            .debug("Domain object factory obtained via entity definition bean");
                    return entityFactory.create();
                }
            }

            // Try to use the entity factory
            final IEntityFactory<S, T> entityFactory = (IEntityFactory<S, T>) this.beanFactory
                    .getBean(StringUtils.uncapitalize(this.entityName)
                            + "Factory", IEntityFactory.class);
            if (entityFactory != null) {
                SerializedCGLibEntity.LOG
                        .debug("Domain object factory obtained via direct look-up");
                return entityFactory.create();
            }

            // Neither worked!
            SerializedCGLibEntity.LOG.warn("Cannot find domain object factory");
            throw new InvalidObjectException(
                    "No entity definition or factory found for " + this.entityName);
        }
    }
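
    For comparison, here is a minimal sketch of a fallback that does not rely on the @Configurable aspect firing during deserialization: a holder bean captures the Spring context at startup so that readResolve()/createEntity() can still look up the entity factory. This is an alternative workaround, not the asker's @Configurable approach; the class name StaticContextHolder, its registration as a component, and its use from createEntity() are assumptions for illustration only.

    import org.springframework.beans.BeansException;
    import org.springframework.context.ApplicationContext;
    import org.springframework.context.ApplicationContextAware;
    import org.springframework.stereotype.Component;

    /**
     * Hypothetical helper (not part of the question's code): Spring calls
     * setApplicationContext() on this bean at startup, and the static getter
     * makes the context reachable from objects Spring never instantiates,
     * such as instances produced by Java deserialization.
     */
    @Component
    public class StaticContextHolder implements ApplicationContextAware {

        private static volatile ApplicationContext context;

        @Override
        public void setApplicationContext(final ApplicationContext applicationContext)
                throws BeansException {
            // Capture the context once the container has started.
            context = applicationContext;
        }

        /** @return the captured context, or null if the container has not started. */
        public static ApplicationContext getContext() {
            return context;
        }
    }

    With such a holder registered as a bean (via component scanning or an explicit bean definition), createEntity() could fall back to StaticContextHolder.getContext().getBean(...) whenever the aspect-injected beanFactory field is null.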

    Read the article

  • "Error in the Site Data Web Service." when performing crawl

    - by Janis Veinbergs
    Installed SharePoint Services v3 (SP2, october 2009 cumulative updates, Language Pack), attached to a content database I had previously (all works). Installed Search server 2008 Express (with language pack) on top of WSS and crawl does not work. However it works for newly created web application + database. Was playing around with accounts, permissions to try get it working. Currently I have WSS_Crawler account with such permissions: Office Search Server runs with WSS_Crawler account Config database has read permissions for WSS_Crawler Content database has read permissions for WSS_Crawler WSS_Crawler is owner of search database. Added WSS_Crawler to SQL server browser user group and administrator Yes, i'v given more permissions than needed, but it doesn't even work with that and i don't know if its permission problem or what. Crawl log says there is Error in the Site Data Web Service., nothing more. There were known issues with a similar error: Error in the Site Data Web Service. (Value does not fall within the expected range.), but this is not the case as thats an old issue and i hope it has been included in SP2... Logs are from olders to newest (descending order). They don't appear to be very helpful. Crawl log http://serveris Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM sts3://serveris Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM sts3://serveris/contentdbid={55180cfa-9d2d-46e4... Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM http://serveris/test Error in the Site Data Web Service. Local Office SharePoint Server sites 3/15/2010 9:39 AM http://serveris Error in the Site Data Web Service. Local Office SharePoint Server sites 3/15/2010 9:39 AM EventLog No errors in EventLog, just some Information events that Office Server Search provides The search service started. Successfully stored the application configuration registry snapshot in the database. Context: Application 'SharedServices Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: Portal_Content. A master merge was started due to an external request. Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog Portal_Content. Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: AnchorProject. A master merge was started due to an external request. Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog AnchorProject. ULS Log Just some information, but no exceptions, unexpected errors 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Insert crawl 771 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Request Start Crawl 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Advise status change 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:28.28 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 8wn6 Information A full crawl was started on 'Local Office SharePoint Server sites' by BALTICOVO\janis.veinbergs. 
03/15/2010 09:03:28.43 mssdmn.exe (0x1750) 0x10F8 ULS Logging Unified Logging Service 8wsv High ULS Init Completed (mssdmn.exe, Microsoft.Office.Server.Native.dll) 03/15/2010 09:03:30.48 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0v Medium Create CCache 03/15/2010 09:03:30.56 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0z Medium Create CUserCatalogCache 03/15/2010 09:03:32.06 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 90ge Medium SQL: dbo.proc_MSS_PropagationGetQueryServers 03/15/2010 09:03:32.09 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 7phq High GetProtocolConfigHelper failed in GetNotesInterface(). 03/15/2010 09:03:34.26 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:35.92 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.32 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.23 mssdmn.exe (0x1750) 0x1850 Search Server Common MS Search Indexing 8z14 Medium Test TRACE (NULL):(null), (NULL)(null), (CrLf): , end 03/15/2010 09:03:39.04 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x0B24 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. error 2147755542, strWebUrl http://serveris 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=test/siteid={390611b2-55f3-4a99-8600-778727177a28}/weburl=/webid={fb0e4bff-65d5-4ded-98d5-fd099456962b}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x1260 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. 
error 2147755542, strWebUrl http://serveris/test 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=/siteid={505443fa-ef12-4f1e-a04b-d5450c939b78}/weburl=/webid={c5a4f8aa-9561-4527-9e1a-b3c23200f11c}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 24, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Remove crawl 771 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Insert crawl 772 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project Portal_Content - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:44.82 mssearch.exe (0x1B2C) 0x1DD0 Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:44.95 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.51 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.39 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 
0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.01 mssearch.exe (0x1B2C) 0x1C6C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.87 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.29 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.53 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.67 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.82 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.84 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.90 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.00 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Remove crawl 772 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Insert crawl 773 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common 
GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Removed start crawl request from Queue 0, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2942 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:56.71 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:56.78 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.40 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Remove crawl 773 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 What could be wrong here - any clues?

    Read the article

  • PC powers off at random times

    - by Timo Huovinen
    Short version: after experiencing some problems with motherboard batteries, my PC started to power off at random times. The power-off is instant and sudden, and the PC does not restart afterwards. I need help figuring out the cause.

    Facts:
    - Powers off when the PC is playing games
    - Powers off when the PC is idle
    - Powers off when the PC is in safe mode
    - Powers off when the PC is in the BIOS
    - Powers off when the PC is booted from a Windows installation USB
    - Replaced the motherboard battery several times
    - Replaced the 650W PSU with a 750W PSU
    - Replaced the RAM
    - Swapped the RAM between slots
    - Re-applied thermal paste to the CPU
    - Checked whether the motherboard touches the case
    - Nothing is overclocked

    PC specs:
    - OS: Windows 7 Ultimate SP1
    - RAM: Kingston 1333MHz 4GB stick
    - CPU: AMD Phenom II x4 955
    - Mobo: Gigabyte 88GMA-UD2H rev 2.2
    - Motherboard battery: CR2032 3V
    - HDD: 500GB Seagate ST3500418AS ATA Device
    - Graphics: ATI/AMD Radeon HD 6870

    Very long version: around 10 months ago I built a brand new gaming PC. Around 6 months ago its time setting in Windows started resetting to the year 2010. I swapped the motherboard battery for a new one of the exact same size, shape and voltage, and the problem disappeared... for around 2 weeks. Then the same problem happened again, the time got reset, I swapped the battery again, and the problem was gone for good; everything was great for about 3 months. Then another problem started: the PC began to power off suddenly and without warning at completely random times, sometimes after working for an hour, sometimes after 5 minutes.

    I read on the forums that it might be the PSU, the motherboard battery, the RAM, the HDD, the graphics card, the CPU, the motherboard, the drivers, a virus, grounding issues, or something short-circuiting; basically it could be anything. I spent some days researching and decided to rule out a virus: I reset the CMOS, cleared all BIOS settings and reinstalled Windows 7 after a full format of the HDD, but the random power-offs kept happening. I then disabled the restart-on-error option in Windows and looked at the event log for error events, but they did not help me figure out the problem:
    - Network list service depends on network location awareness: the dependency group failed to start
    - Source: Kernel Power, Event 41, Task Category 63
    - Source: Disk, Event ID 11, Task Category None: the driver detected a controller error on the disk device

    I took the PC apart, every little piece, re-applied some expensive thermal paste to the CPU, and double-checked that none of the pieces touch the PC case. The problem was gone; the PC no longer powered off randomly. I re-attached the graphics card and all was good for 4 months... then the power-off problem appeared again, but at longer intervals: the PC would shut down once in 2 days on average, at random points in time, sometimes when it was idle all day long, sometimes when it was running Crysis 2. I checked the CPU temperature, because I know that AMD CPUs have a built-in protection mechanism that switches off the PC if the CPU gets too hot; the temperature was 50C system and 45C CPU after running the PC all day long (I did not test for temperature spikes; I don't know how to do that). Originally the PSU that powered the PC was 650W and had one 4-pin cable to power the CPU; I replaced it with a new 750W PSU which has two 4-pin cables for the CPU, but the problem remained.

    I removed the graphics card and let the motherboard use the built-in one, but the PC kept suddenly powering off at random times. I took the PC apart completely again, re-applied thermal paste to the CPU, added lots of insulation, and checked again and again for any kind of short circuit, but the problem remained. It stayed like that for some months. I replaced the battery a couple more times, changed lots of options in Windows, and tried everything I could, but it kept powering off, so I stopped using the PC as much as I used to, just living with the random power-offs from time to time, until a couple of days ago, when the power-off started happening almost immediately after powering on the PC.

    I replaced the RAM with a brand new stick, but that did not help. I took the PC apart again and checked for anything that might cause it, and found some small scratches on the very edge of the motherboard, to the left of the PCI Express x16 slot. This might cause the problem, I thought, but the scratch looks very superficial, not deep at all, and if the scratch had harmed the motherboard, wouldn't it fail to start at all? And why did it start to power off a while ago, and then suddenly stop powering off? The scratches could not have vanished. I ran chkdsk \d but the PC powered off when it was at 75%.

    I removed the hard disks and the graphics card while I fiddled with the BIOS settings, and suddenly the PC shut down while I was looking at the BIOS version. This makes me realize it is not caused by the HDD, Windows, the drivers or the graphics card. I cleared the CMOS again and updated the BIOS from F5 to F6f beta, but that did not help; it might even seem that the PC powers off sooner. The shutdown even happened while I had booted from a Windows 7 installation USB and was in the repair console. I removed one of the cables powering the CPU, so now only one 4-pin cable powers it, and it worked for 30 minutes after doing that, which makes me think that it's the CPU overheating, and because it gets less power, it overheats more slowly?

    The things I am still considering:
    - CPU overheating (it does not seem to overheat; maybe false readings?)
    - Motherboard short-circuiting (a faulty motherboard?)

    I desperately need some advice on what is faulty: is it a faulty motherboard or an overheating CPU, or maybe something else? I have been breaking my head over this problem over a span of 6 months. I'm not sure if this is a good place to ask this question; if it is not, then tell me where I can get some experienced help.

    More info: I have also discovered a mysterious piece that seems to have fallen out of the motherboard: i119.photobucket.com/albums/o126/yurikolovsky/strangepiece.jpg What is it? It also looks like each time the PC powers off, the date/time gets reset. I found another forum post tomshardware.co.uk/forum/… except I don't have the Integrated Peripherals / USB Keyboard Function option in my BIOS :S

    Comments summary (asked by Random moderator):

    Q. Tell me, if the computer restarts, is it immediate? Does it take a second and then restart? Do you see (BSOD) or hear (PSU, short circuit) anything suspicious when it happens? After reading through it, it remains the mainboard that is faulty. – JohannesM
    A. Immediate power-off: all the fans stop instantly, all the lights turn off instantly, no sound or anything, and it remains off until I turn it back on. Thanks for the feedback; a faulty motherboard is what I fear.

    Q. Try stress-testing the system with Prime95 and see if errors or shutdowns occur when the CPU is under full load. – speakr
    A. The Prime95 heat stress test peaked CPU heat at 60C after 5 minutes; it powered off after 30 minutes of testing, in the middle of the test, with no errors. Neither the Prime95 heat test nor the stress test with low RAM usage (small or in-place FFTs) reports errors while testing for 10-60 minutes. The power-off does not seem to be affected by Prime95 at all, which makes me wonder if it's a CPU or motherboard issue at all.

    Q. I had similar random/intermittent problems with my old board. It gave one of a few different symptoms: the keyboard and/or mouse would die, and/or the RAM wouldn't work, and/or it would shut down. It was in bad shape. One problem was that my old PSU had literally burned the connector on it (browned around the pins); another was that a broken lead inside the layers of the PCB would work sometimes if it happened to be hot or if I bent the board by jamming a hunk of wood behind it. I managed to keep the board alive for several years, but eventually nothing I did would make it work correctly anymore. – Synetech
    A. I will try that as a last resort, OK? ;)

    Q. Have you tried a different power cord, surge protector, or outlet (on a different circuit)? It's worth a shot just to ensure it's not subpar wiring or a weak circuit (dips in power may cause a shutdown if the PSU can't pull enough juice from the wall). – Kyle
    A. Yes, I attached the PC to an entirely different outlet on a different circuit and the problem persists. After connecting it to the different outlet, on starting the PC it gave me 3 long beeps and 1 short one, then immediately proceeded to boot up normally.

    Q. Re-check your mainboard manual and all PSU connections to your mainboard to be sure that nothing is missing (e.g. the 12V ATX 4-pin/6-pin connector). If you can provoke shutdowns with Prime95, then consider buying new hardware: a stable system should run Prime95 for 24h without any errors. Prime95 mentions errors in the log when they occur and gives a summary after the stress test is stopped manually (e.g. "0 errors, 0 warnings" if all is fine). – speakr
    A. Re-checked; there are no more PSU connectors that I can physically connect, except the one ATX 4-pin (there are 2 that power the CPU) that I disconnected on purpose. I have reconnected it, but the problem persists.

    Q. With one PC I had a short circuit. The power button on the front plate had its cables soldered, but not isolated, and the contacts were very close to the metal case. A heavier touch was enough to cause a shutdown. The PC's vibration could be enough. – ott--
    A. Yes, it seems to switch off with even the lightest touch. I switched on the PC, then pulled out the front-panel power cable that connects to the motherboard so the power button no longer works; after 5 minutes of running like that, with the power button completely disconnected, just sitting idle, the PC powered off again. I don't think it's the power button.

    Q. I wonder if you dare to operate the components without the case, that is, remove the motherboard, power supply and disk (just put the motherboard on a wooden desk). Don't bend the adapters when running like that. – ott--
    A. Yes, I do dare to do that, but only tomorrow; it's too late and I'm too tired right now.

    Read the article

  • Intermittent wired network issues in 14.04

    - by Tommy Brunn
    Since yesterday, my wired network connection has been dropping for a couple of seconds every 30 seconds or so. To my knowledge, I had not made any changes to my network. Output of ifconfig -a: ? ~ ifconfig -a eth0 Link encap:Ethernet HWaddr 6c:f0:49:b9:b1:7f inet addr:192.168.0.16 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:feb9:b17f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11597 errors:0 dropped:0 overruns:0 frame:0 TX packets:9783 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:10101682 (10.1 MB) TX bytes:1215142 (1.2 MB) Interrupt:48 Base address:0x8000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:96691 errors:0 dropped:0 overruns:0 frame:0 TX packets:96691 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:13594355 (13.5 MB) TX bytes:13594355 (13.5 MB) lspci |grep Ethernet: 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03) Pinging my router: ? ~ ping 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. 64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.435 ms 64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.571 ms ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable 64 bytes from 192.168.0.1: icmp_seq=8 ttl=64 time=1.03 ms And the output of route: ? ~ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 192.168.0.0 * 255.255.255.0 U 1 0 0 eth0 Some messages from /var/logs/syslog: ? ~ tail -f /var/log/syslog Jun 6 10:37:34 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:34 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:37:37 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:37:37 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:37:39 lolbox dhclient: XMT: Solicit on eth0, interval 8660ms. Jun 6 10:37:39 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:39 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:37:47 lolbox dhclient: XMT: Solicit on eth0, interval 16820ms. Jun 6 10:37:47 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:47 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:04 lolbox dhclient: XMT: Solicit on eth0, interval 34410ms. Jun 6 10:38:04 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:04 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out. Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13045 Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled... Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started... 
Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5] Jun 6 10:38:16 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1' Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete. Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0] Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0] Jun 6 10:37:34 lolbox whoopsie[1133]: online Jun 6 10:38:16 lolbox whoopsie[1133]: offline Jun 6 10:38:16 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Jun 6 10:38:16 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13044 Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0. Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS. Jun 6 10:38:16 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:38:17 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:17 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:38:17 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:38:18 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1' Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... 
Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds) Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13160 Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds) Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13161 Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Jun 6 10:38:18 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:38:18 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:18 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:38:18 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:38:18 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:38:18 lolbox dhclient: All rights reserved. Jun 6 10:38:18 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:38:18 lolbox dhclient: Jun 6 10:38:19 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:38:19 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:38:19 lolbox dhclient: All rights reserved. Jun 6 10:38:19 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:38:19 lolbox dhclient: Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit Jun 6 10:38:19 lolbox dhclient: Bound to *:546 Jun 6 10:38:19 lolbox dhclient: Listening on Socket/eth0 Jun 6 10:38:19 lolbox dhclient: Sending on Socket/eth0 Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv6 state changed nbi -> preinit6 Jun 6 10:38:19 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:38:19 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:38:19 lolbox dhclient: Sending on Socket/fallback Jun 6 10:38:19 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3fc9376d) Jun 6 10:38:19 lolbox dhclient: XMT: Solicit on eth0, interval 1020ms. Jun 6 10:38:19 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:38:19 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:38:20 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1 Jun 6 10:38:20 lolbox dhclient: bound to 192.168.0.16 -- renewal in 41481 seconds. 
Jun 6 10:38:20 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot Jun 6 10:38:20 lolbox NetworkManager[862]: <info> address 192.168.0.16 Jun 6 10:38:20 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0) Jun 6 10:38:20 lolbox NetworkManager[862]: <info> gateway 192.168.0.1 Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11' Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150' Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS. Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4. Jun 6 10:38:20 lolbox dhclient: XMT: Solicit on eth0, interval 2110ms. Jun 6 10:38:20 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:38:20 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0] Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0] Jun 6 10:38:21 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf Jun 6 10:38:21 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53 Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53 Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53 Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated. Jun 6 10:38:21 lolbox whoopsie[1133]: message repeated 2 times: [ offline] Jun 6 10:38:21 lolbox whoopsie[1133]: online Jun 6 10:38:21 lolbox ntpdate[13217]: Can't find host ntp.ubuntu.com: Name or service not known (-2) Jun 6 10:38:21 lolbox ntpdate[13217]: no servers can be used, exiting Jun 6 10:38:22 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf Jun 6 10:38:22 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53 Jun 6 10:38:22 lolbox dhclient: XMT: Solicit on eth0, interval 4080ms. Jun 6 10:38:22 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:22 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:26 lolbox dhclient: XMT: Solicit on eth0, interval 8450ms. Jun 6 10:38:26 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:26 lolbox dhclient: IA_NA status code NoAddrsAvail. 
Jun 6 10:38:35 lolbox dhclient: XMT: Solicit on eth0, interval 16630ms. Jun 6 10:38:35 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:35 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:51 lolbox dhclient: XMT: Solicit on eth0, interval 34860ms. Jun 6 10:38:51 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:51 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:58 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:38:58 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out. Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13161 Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled... Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started... Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5] Jun 6 10:39:04 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1' Jun 6 10:38:22 lolbox whoopsie[1133]: online Jun 6 10:39:04 lolbox whoopsie[1133]: offline Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete. Jun 6 10:39:04 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0] Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0] Jun 6 10:39:04 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13160 Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0. Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS. Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf Jun 6 10:39:04 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:39:05 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:05 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:39:05 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. 
Jun 6 10:39:06 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1' Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0] Jun 6 10:39:07 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0] Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0] Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds) Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13270 Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds) Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13271 Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Jun 6 10:39:07 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:39:07 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:07 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:39:07 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:39:07 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:39:07 lolbox dhclient: All rights reserved. Jun 6 10:39:07 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:39:07 lolbox dhclient: Jun 6 10:39:08 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:39:08 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:39:08 lolbox dhclient: All rights reserved. 
Jun 6 10:39:08 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:39:08 lolbox dhclient: Jun 6 10:39:08 lolbox dhclient: Bound to *:546 Jun 6 10:39:08 lolbox dhclient: Listening on Socket/eth0 Jun 6 10:39:08 lolbox dhclient: Sending on Socket/eth0 Jun 6 10:39:08 lolbox kernel: [ 1446.098590] type=1400 audit(1402043948.002:75): apparmor="DENIED" operation="signal" profile="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=13273 comm="nm-dhcp-client." requested_mask="send" denied_mask="send" signal=term peer="/sbin/dhclient" Jun 6 10:39:08 lolbox kernel: [ 1446.098599] type=1400 audit(1402043948.002:76): apparmor="DENIED" operation="signal" profile="/sbin/dhclient" pid=13273 comm="nm-dhcp-client." requested_mask="receive" denied_mask="receive" signal=term peer="/usr/lib/NetworkManager/nm-dhcp-client.action" Jun 6 10:39:08 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit Jun 6 10:39:08 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:39:08 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:39:08 lolbox dhclient: Sending on Socket/fallback Jun 6 10:39:08 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3e0183b9) Jun 6 10:39:08 lolbox dhclient: XMT: Solicit on eth0, interval 1050ms. Jun 6 10:39:08 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:39:08 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:39:09 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1 Jun 6 10:39:09 lolbox dhclient: bound to 192.168.0.16 -- renewal in 35498 seconds. Jun 6 10:39:09 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot Jun 6 10:39:09 lolbox NetworkManager[862]: <info> address 192.168.0.16 Jun 6 10:39:09 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0) Jun 6 10:39:09 lolbox NetworkManager[862]: <info> gateway 192.168.0.1 Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11' Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150' Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS. Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4. Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:39:10 lolbox dhclient: XMT: Solicit on eth0, interval 2180ms. Jun 6 10:39:10 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:10 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0] Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. 
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0] Jun 6 10:39:10 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS. Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf Jun 6 10:39:10 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53 Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53 Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53 Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated. Jun 6 10:39:10 lolbox whoopsie[1133]: message repeated 2 times: [ offline] Jun 6 10:39:10 lolbox whoopsie[1133]: online Jun 6 10:39:10 lolbox ntpdate[13339]: Can't find host ntp.ubuntu.com: Name or service not known (-2) Jun 6 10:39:10 lolbox ntpdate[13339]: no servers can be used, exiting Jun 6 10:39:11 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf Jun 6 10:39:11 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53 Jun 6 10:39:12 lolbox dhclient: XMT: Solicit on eth0, interval 4350ms. Jun 6 10:39:12 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:12 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:16 lolbox dhclient: XMT: Solicit on eth0, interval 8740ms. Jun 6 10:39:16 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:16 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:17 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:17 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:25 lolbox dhclient: XMT: Solicit on eth0, interval 17610ms. Jun 6 10:39:25 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:25 lolbox dhclient: IA_NA status code NoAddrsAvail.

    Read the article

  • Scrolling an HTML 5 page using JQuery

    - by nikolaosk
    In this post I will show you how to use JQuery to scroll through an HTML 5 page.I had to help a friend of mine to implement this functionality and I thought it would be a good idea to write a post.I will not use any JQuery scrollbar plugin,I will just use the very popular JQuery Library. Please download the library (minified version) from http://jquery.com/download.Please find here all my posts regarding JQuery.Also have a look at my posts regarding HTML 5.In order to be absolutely clear this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that.Navigate to the excellent interactive tutorials of W3School.Another excellent resource is HTML 5 Doctor.Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this times Chrome seems to support most of HTML 5 specifications.Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the Javascript lightweight library Modernizr.In this hands-on example I will be using Expression Web 4.0.This application is not a free application. You can use any HTML editor you like.You can use Visual Studio 2012 Express edition. You can download it here. Let me move on to the actual example.This is the sample HTML 5 page<!DOCTYPE html><html lang="en">  <head>    <title>Liverpool Legends</title>        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >        <link rel="stylesheet" type="text/css" href="style.css">        <script type="text/javascript" src="jquery-1.8.2.min.js"> </script>     <script type="text/javascript" src="scroll.js">     </script>       </head>  <body>    <header>        <h1>Liverpool Legends</h1>    </header>        <div id="main">        <table>        <caption>Liverpool Players</caption>        <thead>            <tr>                <th>Name</th>                <th>Photo</th>                <th>Position</th>                <th>Age</th>                <th>Scroll</th>            </tr>        </thead>        <tfoot class="footnote">            <tr>                <td colspan="4">We will add more photos soon</td>            </tr>        </tfoot>    <tbody>        <tr class="maintop">        <td>Alan Hansen</td>            <td>            <figure>            <img src="images\Alan-hansen-large.jpg" alt="Alan Hansen">            <figcaption>The best Liverpool Defender <a href="http://en.wikipedia.org/wiki/Alan_Hansen">Alan Hansen</a></figcaption>            </figure>            </td>            <td>Defender</td>            <td>57</td>            <td class="top">Middle</td>        </tr>        <tr>        <td>Graeme Souness</td>            <td>            <figure>            <img src="images\graeme-souness-large.jpg" alt="Graeme Souness">            <figcaption>Souness was the captain of the successful Liverpool team of the early 1980s <a href="http://en.wikipedia.org/wiki/Graeme_Souness">Graeme Souness</a></figcaption>            </figure>            </td>            <td>MidFielder</td>            <td>59</td>        </tr>        <tr>        <td>Ian Rush</td>            <td>            <figure>            <img src="images\ian-rush-large.jpg" alt="Ian Rush">            <figcaption>The deadliest Liverpool Striker <a href="http://it.wikipedia.org/wiki/Ian_Rush">Ian Rush</a></figcaption>            </figure>            </td>            <td>Striker</td>            <td>51</td>        </tr>        <tr class="mainmiddle">        <td>John 
Barnes</td>            <td>            <figure>            <img src="images\john-barnes-large.jpg" alt="John Barnes">            <figcaption>The best Liverpool Defender <a href="http://en.wikipedia.org/wiki/John_Barnes_(footballer)">John Barnes</a></figcaption>            </figure>            </td>            <td>MidFielder</td>            <td>49</td>            <td class="middle">Bottom</td>        </tr>                <tr>        <td>Kenny Dalglish</td>            <td>            <figure>            <img src="images\kenny-dalglish-large.jpg" alt="Kenny Dalglish">            <figcaption>King Kenny <a href="http://en.wikipedia.org/wiki/Kenny_Dalglish">Kenny Dalglish</a></figcaption>            </figure>            </td>            <td>Midfielder</td>            <td>61</td>        </tr>        <tr>            <td>Michael Owen</td>            <td>            <figure>            <img src="images\michael-owen-large.jpg" alt="Michael Owen">            <figcaption>Michael was Liverpool's top goal scorer from 1997–2004 <a href="http://www.michaelowen.com/">Michael Owen</a></figcaption>            </figure>            </td>            <td>Striker</td>            <td>33</td>        </tr>        <tr>            <td>Robbie Fowler</td>            <td>            <figure>            <img src="images\robbie-fowler-large.jpg" alt="Robbie Fowler">            <figcaption>Fowler scored 183 goals in total for Liverpool <a href="http://en.wikipedia.org/wiki/Robbie_Fowler">Robbie Fowler</a></figcaption>            </figure>            </td>            <td>Striker</td>            <td>38</td>        </tr>        <tr class="mainbottom">            <td>Steven Gerrard</td>            <td>            <figure>            <img src="images\steven-gerrard-large.jpg" alt="Steven Gerrard">            <figcaption>Liverpool's captain <a href="http://en.wikipedia.org/wiki/Steven_Gerrard">Steven Gerrard</a></figcaption>            </figure>            </td>            <td>Midfielder</td>            <td>32</td>            <td class="bottom">Top</td>        </tr>    </tbody></table>          </div>            <footer>        <p>All Rights Reserved</p>      </footer>     </body>  </html>  The markup is very easy to follow and understand. You do not have to type all the code,simply copy and paste it.For those that you are not familiar with HTML 5, please take a closer look at the new tags/elements introduced with HTML 5.When I view the HTML 5 page with Firefox I see the following result. I have also an external stylesheet (style.css). body{background-color:#efefef;}h1{font-size:2.3em;}table { border-collapse: collapse;font-family: Futura, Arial, sans-serif; }caption { font-size: 1.2em; margin: 1em auto; }th, td {padding: .65em; }th, thead { background: #000; color: #fff; border: 1px solid #000; }tr:nth-child(odd) { background: #ccc; }tr:nth-child(even) { background: #404040; }td { border-right: 1px solid #777; }table { border: 1px solid #777;  }.top, .middle, .bottom {    cursor: pointer;    font-size: 22px;    font-weight: bold;    text-align: center;}.footnote{text-align:center;font-family:Tahoma;color:#EB7515;}a{color:#22577a;text-decoration:none;}     a:hover {color:#125949; text-decoration:none;}  footer{background-color:#505050;width:1150px;}These are just simple CSS Rules that style the various HTML 5 tags,classes. 
The jQuery code that makes it all possible resides inside the scroll.js file. Make sure you type everything correctly.

        $(document).ready(function() {
            $('.top').click(function(){
                $('html, body').animate({
                    scrollTop: $(".mainmiddle").offset().top
                }, 4000);
            });
            $('.middle').click(function(){
                $('html, body').animate({
                    scrollTop: $(".mainbottom").offset().top
                }, 4000);
            });
            $('.bottom').click(function(){
                $('html, body').animate({
                    scrollTop: $(".maintop").offset().top
                }, 4000);
            });
        });

Let me explain what I am doing here. When I click on the word "Middle" ( $('.top').click(function(){ ), the handler bound to the top class runs. Then we declare the elements that take part in the scrolling; in this case it is html,body ( $('html, body').animate ). These are the elements that will be scrolled vertically. In the next line of code we simply move (navigate) to the target element (the mainmiddle class, which is attached to a tr element): scrollTop: $(".mainmiddle").offset().top. Make sure you type all the code correctly and try it for yourself. I have tested this solution with all 4-5 major browsers. Hope it helps!!!

    Read the article

  • Bogus InvalidOperationException (in a DataServiceRequestException)

    - by Andrei Rinea
    I am having a hard time with ADO.NET Data Services (formerly code-named Astoria): it gives me a bogus exception when I try to insert a new entity from the Silverlight client, while the same code in a clean project doesn't. In both cases, however, the data is correctly inserted into the database. Using Fiddler (an HTTP debugger) I could see that there is no problem in the HTTP communication, as I will show later in this question. The code:

        var ctx = new MyProject123Entities(new Uri("http://andreiri/MyProject.Data/Data.svc"));
        var i = new Zone() { Data = DateTime.Now, IdElement = 1 };
        ctx.AddToZone(i);
        i.StareZone = new StareZone() { IdStareZone = 1 };
        ctx.AttachTo("StareZone", i.StareZone);
        ctx.SetLink(i, "StareZone", i.StareZone);
        i.TipZone = new TipZone() { IdTipZone = 1 };
        ctx.AttachTo("TipZone", i.TipZone);
        ctx.SetLink(i, "TipZone", i.TipZone);
        i.User = new User() { IdUser = 2 };
        ctx.AttachTo("User", i.User);
        ctx.SetLink(i, "User", i.User);
        ctx.BeginSaveChanges(r => ctx.EndSaveChanges(r), null);

    When run, the last line (ctx.EndSaveChanges(r)) throws the following exception:

        System.Data.Services.Client.DataServiceRequestException was unhandled by user code
        Message="An error occurred while processing this request."
        StackTrace:
        at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleBatchResponse()
        at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.EndRequest()
        at System.Data.Services.Client.DataServiceContext.EndSaveChanges(IAsyncResult asyncResult)
        at MyProject.MainPage.<>c__DisplayClassd6.<>c__DisplayClassd8.<dashboard_PostZoneCurent>b__d5(IAsyncResult r)
        at System.Data.Services.Client.BaseAsyncResult.HandleCompleted()
        at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleCompleted(PerRequest pereq)
        at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndRead(IAsyncResult asyncResult)
        at System.IO.Stream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
        at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndGetResponse(IAsyncResult asyncResult)
        InnerException: System.InvalidOperationException
        Message="The context is already tracking a different entity with the same resource Uri."
        StackTrace:
        at System.Data.Services.Client.DataServiceContext.AttachTo(Uri identity, Uri editLink, String etag, Object entity, Boolean fail)
        at System.Data.Services.Client.MaterializeAtom.MoveNext()
        at System.Data.Services.Client.DataServiceContext.HandleResponsePost(ResourceBox entry, MaterializeAtom materializer, Uri editLink, String etag)
        at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.<HandleBatchResponse>d__1d.MoveNext()
        InnerException:

    (There is no further information regarding the exception, although the ADO.NET Data Service is configured to return detailed information.) However, the row is inserted correctly and completely in the database. 
Using fiddler I can see that the request : <?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="MyProject123Model.Zone" /> <title /> <updated>2009-09-11T13:36:46.917157Z</updated> <author> <name /> </author> <id /> <link href="http://andreiri/MyProject.Data/Data.svc/StareZone(1)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" /> <link href="http://andreiri/MyProject.Data/Data.svc/TipZone(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" /> <link href="http://andreiri/MyProject.Data/Data.svc/User(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" /> <content type="application/xml"> <m:properties> <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data> <d:Detalii>aslkdfjasldkfj</d:Detalii> <d:IdElement m:type="Edm.Int32">1</d:IdElement> <d:IdZone m:type="Edm.Int32">0</d:IdZone> <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post> <d:X_Repost m:type="Edm.Decimal" m:null="true" /> <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post> <d:Y_Repost m:type="Edm.Decimal" m:null="true" /> </m:properties> </content> </entry> is well accepted and a successful response is returned : HTTP/1.1 201 Created Date: Fri, 11 Sep 2009 13:36:47 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 DataServiceVersion: 1.0; Location: http://andreiri/MyProject.Data/Data.svc/Zone(75) Cache-Control: no-cache Content-Type: application/atom+xml;charset=utf-8 Content-Length: 2213 <?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xml:base="http://andreiri/MyProject.Data/Data.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <id>http://andreiri/MyProject.Data/Data.svc/Zone(75)</id> <title type="text"></title> <updated>2009-09-11T13:36:47Z</updated> <author> <name /> </author> <link rel="edit" title="Zone" href="Zone(75)" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CenterZone" type="application/atom+xml;type=feed" title="CenterZone" href="Zone(75)/CenterZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/ZoneMobil" type="application/atom+xml;type=feed" title="ZoneMobil" href="Zone(75)/ZoneMobil" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" title="StareZone" href="Zone(75)/StareZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" title="TipZone" href="Zone(75)/TipZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" title="User" href="Zone(75)/User" /> <category term="MyProject123Model.Zone" scheme="http://schemas.microsoft.com ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:IdZone m:type="Edm.Int32">75</d:IdZone> <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post> <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post> <d:X_Repost 
m:type="Edm.Decimal" m:null="true" /> <d:Y_Repost m:type="Edm.Decimal" m:null="true" /> <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data> <d:Detalii>aslkdfjasldkfj</d:Detalii> <d:IdElement m:type="Edm.Int32">1</d:IdElement> </m:properties> </content> </entry> Why do I get an exception? And, using this in a clean project does not throw the exception..
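
    For reference, here is a minimal sketch of the same attach-and-link pattern factored into one helper, so each related stub is attached exactly once before saving. MyProject123Entities, Zone, StareZone, TipZone, User and the entity-set names are taken from the post; the LinkExisting helper and the synchronous SaveChanges call are illustrative assumptions, not the original code, and they do not by themselves claim to resolve the tracking error.

        using System;
        using System.Data.Services.Client;

        static class ZoneInsertSketch
        {
            // Attach a key-only stub for an existing entity and bind it to a navigation
            // property of the new object. The "context is already tracking a different
            // entity with the same resource Uri" error is raised when the context ends up
            // holding two tracked objects that resolve to the same URI, for example a
            // manually attached stub plus an entity materialized from a response.
            static void LinkExisting(DataServiceContext ctx, string entitySet,
                                     object source, string navigationProperty, object target)
            {
                ctx.AttachTo(entitySet, target);
                ctx.SetLink(source, navigationProperty, target);
            }

            public static void InsertZone()
            {
                var ctx = new MyProject123Entities(new Uri("http://andreiri/MyProject.Data/Data.svc"));
                var zone = new Zone { Data = DateTime.Now, IdElement = 1 };
                ctx.AddToZone(zone);

                LinkExisting(ctx, "StareZone", zone, "StareZone", new StareZone { IdStareZone = 1 });
                LinkExisting(ctx, "TipZone", zone, "TipZone", new TipZone { IdTipZone = 1 });
                LinkExisting(ctx, "User", zone, "User", new User { IdUser = 2 });

                // Saving synchronously while debugging makes the failing batch response
                // easier to inspect than the BeginSaveChanges/EndSaveChanges pair.
                ctx.SaveChanges();
            }
        }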

    Read the article

  • Snort's problems in generating alerts from the DARPA 1998 intrusion detection dataset

    - by manofseven2
    Hi. I'm working on the DARPA 1998 intrusion detection dataset. When I run Snort on this dataset (the outside.tcpdump file), Snort doesn't generate the complete list of alerts: it starts from the last few hours of the tcpdump file and generates alerts only for that section, while all the packets from the first hours are ignored. Another problem is the timestamp of the generated alerts: when I run Snort on a specific day of the dataset, it inserts an incorrect timestamp for that alert. The configuration, command line, and other information about my research are:

    Snort version: 2.8.6
    Operating system: Windows XP
    Rule version: snortrules-snapshot-2860_s.tar.gz

    Command line: snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump -l D:\users\amir\current-task\research\thesis\snort\890230

    Snort.config: # Setup the network addresses you are protecting var HOME_NET any # Set up the external network addresses. Leave as "any" in most situations var EXTERNAL_NET any # List of DNS servers on your network var DNS_SERVERS $HOME_NET # List of SMTP servers on your network var SMTP_SERVERS $HOME_NET # List of web servers on your network var HTTP_SERVERS $HOME_NET # List of sql servers on your network var SQL_SERVERS $HOME_NET # List of telnet servers on your network var TELNET_SERVERS $HOME_NET # List of ssh servers on your network var SSH_SERVERS $HOME_NET # List of ports you run web servers on portvar HTTP_PORTS [80,1220,2301,3128,7777,7779,8000,8008,8028,8080,8180,8888,9999] # List of ports you want to look for SHELLCODE on. 
portvar SHELLCODE_PORTS !80 # List of ports you might see oracle attacks on portvar ORACLE_PORTS 1024: # List of ports you want to look for SSH connections on: portvar SSH_PORTS 22 # other variables, these should not be modified var AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24] var RULE_PATH ../rules var SO_RULE_PATH ../so_rules var PREPROC_RULE_PATH ../preproc_rules # Stop generic decode events: config disable_decode_alerts # Stop Alerts on experimental TCP options config disable_tcpopt_experimental_alerts # Stop Alerts on obsolete TCP options config disable_tcpopt_obsolete_alerts # Stop Alerts on T/TCP alerts config disable_tcpopt_ttcp_alerts # Stop Alerts on all other TCPOption type events: config disable_tcpopt_alerts # Stop Alerts on invalid ip options config disable_ipopt_alerts # Alert if value in length field (IP, TCP, UDP) is greater th elength of the packet # config enable_decode_oversized_alerts # Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts) # config enable_decode_oversized_drops # Configure IP / TCP checksum mode config checksum_mode: all config pcre_match_limit: 1500 config pcre_match_limit_recursion: 1500 # Configure the detection engine See the Snort Manual, Configuring Snort - Includes - Config config detection: search-method ac-split search-optimize max-pattern-len 20 # Configure the event queue. For more information, see README.event_queue config event_queue: max_queue 8 log 3 order_events content_length dynamicpreprocessor directory D:\programs\Snort_2.8.6\snort\lib\snort_dynamicpreprocessor dynamicengine D:\programs\Snort_2.8.6\snort\lib\snort_dynamicengine\sf_engine.dll # path to dynamic rules libraries #dynamicdetection directory /usr/local/lib/snort_dynamicrules preprocessor frag3_global: max_frags 65536 preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180 preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \ overlap_limit 10, small_segments 3 bytes 150, timeout 180, \ ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \ 161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \ 7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \ ports both 80 443 465 563 636 989 992 993 994 995 1220 2301 3128 6907 7702 7777 7779 7801 7900 7901 7902 7903 7904 7905 \ 7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 8000 8008 8028 8080 8180 8888 9999 preprocessor stream5_udp: timeout 180 preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480 preprocessor http_inspect_server: server default \ chunk_length 500000 \ server_flow_depth 0 \ client_flow_depth 0 \ post_depth 65495 \ oversize_dir_length 500 \ max_header_length 750 \ max_headers 100 \ ports { 80 1220 2301 3128 7777 7779 8000 8008 8028 8080 8180 8888 9999 } \ non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \ enable_cookie \ extended_response_inspection \ inspect_gzip \ apache_whitespace no \ ascii no \ bare_byte no \ directory no \ double_decode no \ iis_backslash no \ iis_delimiter no \ iis_unicode no \ multi_slash no \ non_strict \ u_encode yes \ webroot no preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 
32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete preprocessor bo preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no preprocessor ftp_telnet_protocol: telnet \ ayt_attack_thresh 20 \ normalize ports { 23 } \ detect_anomalies preprocessor ftp_telnet_protocol: ftp server default \ def_max_param_len 100 \ ports { 21 2100 3535 } \ telnet_cmds yes \ ignore_telnet_erase_cmds yes \ ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \ ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \ ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \ ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \ ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \ ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \ ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \ ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \ ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \ ftp_cmds { XSEN XSHA1 XSHA256 } \ alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \ alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \ alt_max_param_len 256 { CWD RNTO } \ alt_max_param_len 400 { PORT } \ alt_max_param_len 512 { SIZE } \ chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \ chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \ chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \ chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \ chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \ chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \ chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \ chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \ cmd_validity ALLO \ cmd_validity EPSV \ cmd_validity MACB \ cmd_validity MDTM \ cmd_validity MODE \ cmd_validity PORT \ cmd_validity PROT \ cmd_validity STRU \ cmd_validity TYPE preprocessor ftp_telnet_protocol: ftp client default \ max_resp_len 256 \ bounce yes \ ignore_telnet_erase_cmds yes \ telnet_cmds yes preprocessor smtp: ports { 25 465 587 691 } \ inspection_type stateful \ normalize cmds \ normalize_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ max_command_line_len 512 \ max_header_line_len 1000 \ max_response_line_len 512 \ alt_max_command_line_len 260 { MAIL } \ alt_max_command_line_len 300 { RCPT } \ alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \ alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \ alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ valid_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ xlink2state { enabled } preprocessor ssh: server_ports { 22 } \ autodetect \ max_client_bytes 19600 \ max_encrypted_packets 20 \ max_server_version_len 100 \ enable_respoverflow enable_ssh1crc32 \ enable_srvoverflow enable_protomismatch preprocessor dcerpc2: memcap 102400, events [co ] preprocessor dcerpc2_server: default, policy WinXP, \ detect [smb [139,445], tcp 135, udp 135, 
rpc-over-http-server 593], \ autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \ smb_max_chain 3 preprocessor dns: ports { 53 } enable_rdata_overflow preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted # SDF sensitive data preprocessor. For more information see README.sensitive_data preprocessor sensitive_data: alert_threshold 25 output alert_full: alert.log output database: log, mysql, user=root password=123456 dbname=snort host=localhost include classification.config include reference.config include $RULE_PATH/local.rules include $RULE_PATH/attack-responses.rules include $RULE_PATH/backdoor.rules include $RULE_PATH/bad-traffic.rules include $RULE_PATH/chat.rules include $RULE_PATH/content-replace.rules include $RULE_PATH/ddos.rules include $RULE_PATH/dns.rules include $RULE_PATH/dos.rules include $RULE_PATH/exploit.rules include $RULE_PATH/finger.rules include $RULE_PATH/ftp.rules include $RULE_PATH/icmp.rules include $RULE_PATH/icmp-info.rules include $RULE_PATH/imap.rules include $RULE_PATH/info.rules include $RULE_PATH/misc.rules include $RULE_PATH/multimedia.rules include $RULE_PATH/mysql.rules include $RULE_PATH/netbios.rules include $RULE_PATH/nntp.rules include $RULE_PATH/oracle.rules include $RULE_PATH/other-ids.rules include $RULE_PATH/p2p.rules include $RULE_PATH/policy.rules include $RULE_PATH/pop2.rules include $RULE_PATH/pop3.rules include $RULE_PATH/rpc.rules include $RULE_PATH/rservices.rules include $RULE_PATH/scada.rules include $RULE_PATH/scan.rules include $RULE_PATH/shellcode.rules include $RULE_PATH/smtp.rules include $RULE_PATH/snmp.rules include $RULE_PATH/specific-threats.rules include $RULE_PATH/spyware-put.rules include $RULE_PATH/sql.rules include $RULE_PATH/telnet.rules include $RULE_PATH/tftp.rules include $RULE_PATH/virus.rules include $RULE_PATH/voip.rules include $RULE_PATH/web-activex.rules include $RULE_PATH/web-attacks.rules include $RULE_PATH/web-cgi.rules include $RULE_PATH/web-client.rules include $RULE_PATH/web-coldfusion.rules include $RULE_PATH/web-frontpage.rules include $RULE_PATH/web-iis.rules include $RULE_PATH/web-misc.rules include $RULE_PATH/web-php.rules include $RULE_PATH/x11.rules include threshold.conf -————————————————————————————- Can anyone help me to solve this problem? Thanks.

    Read the article

  • WCF timeout exception detailed investigation

    - by Jason Kealey
    We have an application that has a WCF service (*.svc) running on IIS7 and various clients querying the service. The server is running Win 2008 Server. The clients are running either Windows 2008 Server or Windows 2003 server. I am getting the following exception, which I have seen can in fact be related to a large number of potential WCF issues. System.TimeoutException: The request channel timed out while waiting for a reply after 00:00:59.9320000. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The HTTP request to 'http://www.domain.com/WebServices/myservice.svc/gzip' has exceeded the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout. I have increased the timeout to 30min and the error still occurred. This tells me that something else is at play, because the quantity of data could never take 30min to upload or download. The error comes and goes. At the moment, it is more frequent. It does not seem to matter if I have 3 clients running simultaneously or 100, it still occurs once in a while. Most of the time, there are no timeouts but I still get a few per hour. The error comes from any of the methods that are invoked. One of these methods does not have parameters and returns a bit of data. Another takes in lots of data as a parameter but executes asynchronously. The errors always originate from the client and never reference any code on the server in the stack trace. It always ends with: at System.Net.HttpWebRequest.GetResponse() at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout) On the server: I've tried (and currently have) the following binding settings: maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" It does not seem to have an impact. I've tried (and currently have) the following throttling settings: <serviceThrottling maxConcurrentCalls="1500" maxConcurrentInstances="1500" maxConcurrentSessions="1500"/> It does not seem to have an impact. I currently have the following settings for the WCF service. [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single)] I ran with ConcurrencyMode.Multiple for a while, and the error still occurred. I've tried restarting IIS, restarting my underlying SQL Server, restarting the machine. All of these don't seem to have an impact. I've tried disabling the Windows firewall. It does not seem to have an impact. On the client, I have these settings: maxReceivedMessageSize="2147483647" <system.net> <connectionManagement> <add address="*" maxconnection="16"/> </connectionManagement> </system.net> My client closes its connections: var client = new MyClient(); try { return client.GetConfigurationOptions(); } finally { client.Close(); } I have changed the registry settings to allow more outgoing connections: MaxConnectionsPerServer=24, MaxConnectionsPer1_0Server=32. I have now just recently tried SvcTraceViewer.exe. I managed to catch one exception on the client end. I see that its duration is 1 minute. Looking at the server side trace, I can see that the server is not aware of this exception. The maximum duration I can see is 10 seconds. I have looked at active database connections using exec sp_who on the server. I only have a few (2-3). 
I have looked at TCP connections from one client using TCPView. It is usually around 2-3, and I have seen up to 5 or 6. Simply put, I am stumped. I have tried everything I could find, and must be missing something very simple that a WCF expert would be able to see. It is my gut feeling that something is blocking my clients at the low level (TCP), before the server actually receives the message, and/or that something is queuing the messages at the server level and never letting them process. If you have any performance counters I should look at, please let me know (please indicate what values are bad, as some of these counters are hard to decipher). Also, how could I log the WCF message size? Finally, are there any tools out there that would allow me to test how many connections I can establish between my client and server (independently from my application)? Thanks for your time! Extra information added June 20th: My WCF application does something similar to the following. while (true) { Step1GetConfigurationSettingsFromServerViaWCF(); // can change between calls Step2GetWorkUnitFromServerViaWCF(); DoWorkLocally(); // takes 5-15 minutes. Step3SendBackResultsToServerViaWCF(); } Using Wireshark, I did see that when the error occurs, I have five TCP retransmissions followed by a TCP reset later on. My guess is the RST is coming from WCF killing the connection. The exception report I get is from Step3 timing out. I discovered this by looking at the TCP stream "tcp.stream eq 192". I then expanded my filter to "tcp.stream eq 192 and http and http.request.method eq POST" and saw 6 POSTs during this stream. This seemed odd, so I checked with another stream such as tcp.stream eq 100. I had three POSTs, which seems a bit more normal because I am doing three calls. However, I do close my connection after every WCF call, so I would have expected one call per stream (but I don't know much about TCP). Investigating a bit more, I dumped the HTTP packet load to disk to look at what these six calls were: 1) Step3 2) Step1 3) Step2 4) Step3 - corrupted 5) Step1 6) Step2. My guess is that two concurrent clients are using the same connection, which is why I saw duplicates. However, I still have a few more issues that I can't comprehend: a) Why is the packet corrupted? Random network fluke, maybe? The load is gzipped using this sample code: http://msdn.microsoft.com/en-us/library/ms751458.aspx - could the code be buggy once in a while when used concurrently? I should test without the gzip library. b) Why would I see Step 1 and Step 2 running AFTER the corrupted operation timed out? It seems to me as if these operations should not have occurred. Maybe I am not looking at the right stream because my understanding of TCP is flawed. I have other streams that occur at the same time. I should investigate other streams - a quick glance at streams 190-194 shows that the Step3 POSTs have proper payload data (not corrupted), pushing me to look at the gzip library again.
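
    One low-level detail worth ruling out is how the proxy is shut down. Calling Close() on a channel that has already faulted (for example, after a timeout) throws a second exception and can leave the underlying HTTP connection tied up until it is reclaimed, which eats into the maxconnection budget. Below is a minimal sketch of the usual close-or-abort pattern, assuming MyClient is the generated ClientBase proxy from the snippet above; the ConfigurationOptions return type is only a placeholder for whatever GetConfigurationOptions actually returns.

        using System;
        using System.ServiceModel;

        static class WcfClientHelper
        {
            // MyClient and GetConfigurationOptions come from the post; this helper is illustrative only.
            public static ConfigurationOptions GetOptionsSafely()
            {
                var client = new MyClient();
                try
                {
                    return client.GetConfigurationOptions();
                }
                finally
                {
                    try
                    {
                        // A faulted channel cannot be closed gracefully; Abort() tears it
                        // down immediately instead of throwing again from Close().
                        if (client.State == CommunicationState.Faulted)
                            client.Abort();
                        else
                            client.Close();
                    }
                    catch (CommunicationException) { client.Abort(); }
                    catch (TimeoutException) { client.Abort(); }
                }
            }
        }

    This does not by itself explain the six POSTs observed on one stream, but it removes one common source of connections that are never returned to the pool.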

    Read the article

  • Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transformDocument]

    - by Sean Ochoa
    So, I'm using the XSLT plugin for JQuery, and here's my code: function AddPlotcardEventHandlers(){ // some code } function reportError(exception){ alert(exception.constructor.name + " Exception: " + ((exception.name) ? exception.name : "[unknown name]") + " - " + exception.message); } function GetPlotcards(){ $("#content").xslt("../xml/plotcards.xml","../xslt/plotcards.xsl", AddPlotcardEventHandlers,reportError); } Here's the modified jquery plugin. I say that its modified because I've added callbacks for success and error handling. /* * jquery.xslt.js * * Copyright (c) 2005-2008 Johann Burkard (<mailto:[email protected]>) * <http://eaio.com> * * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation * the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the * Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included * in all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE * USE OR OTHER DEALINGS IN THE SOFTWARE. * */ /** * jQuery client-side XSLT plugins. * * @author <a href="mailto:[email protected]">Johann Burkard</a> * @version $Id: jquery.xslt.js,v 1.10 2008/08/29 21:34:24 Johann Exp $ */ (function($) { $.fn.xslt = function() { return this; } var str = /^\s*</; if (document.recalc) { // IE 5+ $.fn.xslt = function(xml, xslt, onSuccess, onError) { try{ var target = $(this); var change = function() { try{ var c = 'complete'; if (xm.readyState == c && xs.readyState == c) { window.setTimeout(function() { target.html(xm.transformNode(xs.XMLDocument)); if (onSuccess) onSuccess(); }, 50); } }catch(exception){ if (onError) onError(exception); } }; var xm = document.createElement('xml'); xm.onreadystatechange = change; xm[str.test(xml) ? "innerHTML" : "src"] = xml; var xs = document.createElement('xml'); xs.onreadystatechange = change; xs[str.test(xslt) ? 
"innerHTML" : "src"] = xslt; $('body').append(xm).append(xs); return this; }catch(exception){ if (onError) onError(exception); } }; } else if (window.DOMParser != undefined && window.XMLHttpRequest != undefined && window.XSLTProcessor != undefined) { // Mozilla 0.9.4+, Opera 9+ var processor = new XSLTProcessor(); var support = false; if ($.isFunction(processor.transformDocument)) { support = window.XMLSerializer != undefined; } else { support = true; } if (support) { $.fn.xslt = function(xml, xslt, onSuccess, onError) { try{ var target = $(this); var transformed = false; var xm = { readyState: 4 }; var xs = { readyState: 4 }; var change = function() { try{ if (xm.readyState == 4 && xs.readyState == 4 && !transformed) { var processor = new XSLTProcessor(); if ($.isFunction(processor.transformDocument)) { // obsolete Mozilla interface resultDoc = document.implementation.createDocument("", "", null); processor.transformDocument(xm.responseXML, xs.responseXML, resultDoc, null); target.html(new XMLSerializer().serializeToString(resultDoc)); } else { processor.importStylesheet(xs.responseXML); resultDoc = processor.transformToFragment(xm.responseXML, document); target.empty().append(resultDoc); } transformed = true; if (onSuccess) onSuccess(); } }catch(exception){ if (onError) onError(exception); } }; if (str.test(xml)) { xm.responseXML = new DOMParser().parseFromString(xml, "text/xml"); } else { xm = $.ajax({ dataType: "xml", url: xml}); xm.onreadystatechange = change; } if (str.test(xslt)) { xs.responseXML = new DOMParser().parseFromString(xslt, "text/xml"); change(); } else { xs = $.ajax({ dataType: "xml", url: xslt}); xs.onreadystatechange = change; } }catch(exception){ if (onError) onError(exception); }finally{ return this; } }; } } })(jQuery); And, here's my error msg: Object Exception: [unknown name] - Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transformDocument] Here's the info on the browser that I'm using for testing (with firebug v1.5.4 add-on installed): Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Here's my XML: <?xml version="1.0" encoding="ISO-8859-1"?> <plotcardCollection sortby="order"> <plotcard order="2" id="1378"> <name><![CDATA[[placeholder for name of plotcard 1378]]]></name> <content><![CDATA[[placeholder for content of plotcard 1378]]]></content> <tagCollection> <tag id="3"><![CDATA[[placeholder for tag with id=3]]]></tag> <tag id="7"><![CDATA[[placeholder for tag with id=7]]]></tag> </tagCollection> </plotcard> <plotcard order="1" id="2156"> <name><![CDATA[[placeholder for name of plotcard 2156]]]></name> <content><![CDATA[[placeholder for content of plotcard 2156]]]></content> <tagCollection> <tag id="2"><![CDATA[[placeholder for tag with id=2]]]></tag> <tag id="9"><![CDATA[[placeholder for tag with id=9]]]></tag> </tagCollection> </plotcard> </plotcardCollection> Here's my XSLT: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/plotcardCollection"> <xsl:variable name="sortby" select="@sortby" /> <xsl:for-each select="plotcard"> <xsl:sort select="$sortby" data-type="number" order="ascending"/> <div> <!-- Start Plotcard --> <xsl:attribute name="class">Plotcard</xsl:attribute> <xsl:for-each select="@"> <xsl:value-of select="name()"/> <xsl:text>='</xsl:text> <xsl:if test="name() = 'id'"> <xsl:text>Plotcard-</xsl:text> </xsl:if> <xsl:value-of select="." 
/> <xsl:text>'</xsl:text> </xsl:for-each> <!-- Start Plotcard Name Section --> <div> <xsl:attribute name="class"> <xsl:text disable-output-escaping="yes">PlotcardName</xsl:text> </xsl:attribute> <xsl:value-of select="name/text()"/> </div> <!-- Start Plotcard Content Section --> <div> <xsl:attribute name="class"> <xsl:text disable-output-escaping="yes">PlotcardContent</xsl:text> </xsl:attribute> <xsl:value-of select="content/text()"/> </div> </div> </xsl:for-each> </xsl:template> </xsl:stylesheet> I'm really not sure what to do about this.... any thoughts?

    Read the article

  • WordPress contact form email as PDF

    - by lock
    I am using the below code for my WordPress site which is emailing all the form details as an HTML text but I need the details to be written into a PDF first and then have to email the PDF as an attachment. How can I achieve this? This is not a PHP code to use PHP's writePDF modules. So, any idea or any code to implement this? <div style="padding-left: 100px;"> [raw] [contact-form subject="Best Aussie Broker" to="[email protected]"] <div id="main34" style="border: 1px solid black; border-radius: 15px; width: 720px; padding: 15px;"> &nbsp; <h2><span style="color: #ff6600;">Express Application</span></h2> &nbsp; [contact-field label="First Name" type="name" required="true" /] [contact-field label="Last Name" type="text" /] [contact-field label="Email" type="email" required="true" /] [contact-field label="Purpose of Finance?" type="select" options="Home Loan,Refinance,Investment Loan,Debt Consolidation,Other" /] [contact-field label="Your deposit amount" type="text" /] [contact-field label="Amount you need to borrow?" type="text" /] [contact-field label="Brief description of the purpose for finance" type="textarea" required="true" /] <div><label></label> <input class="radio" type="radio" name="19" value="Single Application" onchange="showsingle();" /> <label class="radio">Single Application</label> <div class="clear-form"></div> <input class="radio" type="radio" name="19" value="Joint Application" onchange="showjoint();" /> <label class="radio">Joint Application</label> <div class="clear-form"></div> [contact-field label="Privacy Act" type="checkbox" required="true" /] I have read the Privacy Act 1988 (as Amended) and understand that by selecting the submit button I/we Authorize Best Aussie Broker to act on my/our behalf and manage personal information in relation to this application.<br> <a href="http://googleplex.com.au/pdf.pdf"><img src="http://googleplex.com.au/pdf.png" alt="" /> </a> </div> </div> <div id="single" style="display: none; width: 720px; border: 1px solid black; border-radius: 15px; padding: 15px; margin-top: 10px;"> <div style="padding-top: 10px; width: 720px; text-align: left;"> <h4><span style="color: #ff6600;">Last step then we will get all listed Australian vendors to fight it out for your best deal</span></h4> </div> <div> <label class="select" for="19-date-of-birth">Date of Birth</label> [contact-field label="Day" type="select" options="1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31" /] [contact-field label="Month" type="select" options="January,February,March,April,May,June,July,August,September,October,November,December" /] [contact-field label="Year" type="select" options="2000,1999,1998,1997,1996,1995,1994,1993,1992,1991,1990,1989,1988,1987,1986,1985,1984,1983,1982,1981,1980,1979,1978,197,1976,1975,1974,1973,1972,1971,1970,1969,1968,1967,1966,1965,1964,1963,1962,1961,1960,1959,1958,1957,1956,1955,1954,1953,1952,1951,1950,1949,1948,1947,1946,1945,1944,1943,1942,1941,1940,1939,1938,1937,1936,1935,1934,1933,1932,1931,1930,1929,1928,1927,1926,1925,1924,1923,1922,1921,1920, 1919,1918,1917,1916,1915,1914,1913,1912,1911,1910,1909" /] </div> [contact-field label="Address" type="text" /] [contact-field label="Suburb" type="text" /] [contact-field label="Postcode" type="text" /] <div> [contact-field label="State" type="select" options="VIC,NSW,QLD,SA,WA,TAS,NZ,Other" /] </div> [contact-field label="Best Contact" type="radio" options="Landline,Mobile" /] [contact-field label="Phone Number" type="text" /] [contact-field label="Marital 
Status" type="select" options="Married,Single,Other" /] [contact-field label="Residential Status" type="select" options="Renting, Home Owned, Home Mortgage, Board, Other" /] [contact-field label="Children/Dependents" type="select" options="0,1,2,3,4,5,6" /] <div></div> [contact-field label="Gross Yearly Income" type="text" /] [contact-field label="Current Employer" type="text" /] <div> <label class="select" for="19-year-of-empl">Time at this employer</label> [contact-field label="Year" type="select" options="0,1,2,3,4,5,6,7,8,9,10,More" /] [contact-field label="Month" type="select" options="0,1,2,3,4,5,6,7,8,9,10,11,12" /] </div> <div style="padding-right: 15px;"></div> </div> <div id="joint" style="display: none; width: 720px; border: 1px solid black; border-radius: 15px; padding: 15px; margin-top: 10px;"> <div style="padding-top: 10px; width: 720px; text-align: left;"> <h4><span style="color: #ff6600;">Last step then we will get all listed Australian vendors to fight it out for your best deal</span></h4> </div> <div style="float: left; width: 320px;"> <div> <label class="select" for="19-date-of-birth1">Date of Birth</label> [contact-field label="Day" type="select" options="1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31" /] [contact-field label="Month" type="select" options="January,February,March,April,May,June,July,August,September,October,November,December" /] [contact-field label="Year" type="select" options="2000,1999,1998,1997,1996,1995,1994,1993,1992,1991,1990,1989,1988,1987,1986,1985,1984,1983,1982,1981,1980,1979,1978,197,1976,1975,1974,1973,1972,1971,1970,1969,1968,1967,1966,1965,1964,1963,1962,1961,1960,1959,1958,1957,1956,1955,1954,1953,1952,1951,1950,1949,1948,1947,1946,1945,1944,1943,1942,1941,1940,1939,1938,1937,1936,1935,1934,1933,1932,1931,1930,1929,1928,1927,1926,1925,1924,1923,1922,1921,1920, 1919,1918,1917,1916,1915,1914,1913,1912,1911,1910,1909" /] </div> [contact-field label="Address" type="text" /] [contact-field label="Suburb" type="text" /] [contact-field label="Postcode" type="text" /] <div> [contact-field label="State" type="select" options="VIC,NSW,QLD,SA,WA,TAS,NZ,Other" /] </div> [contact-field label="Best Contact" type="radio" options="Landline,Mobile" /] [contact-field label="Phone Number" type="text" /] <div></div> <div></div> [contact-field label="Marital Status" type="select" options="Married,Single,Other" /] [contact-field label="Residential Status" type="select" options="Renting, Home Owned, Home Mortgage, Board, Other" /] [contact-field label="Children/Dependents" type="select" options="0,1,2,3,4,5,6" /] <div></div> <div><label class="text" for="netincome">Net Income</label> <input id="netincome" type="text" name="netincome" /> <select id="netincome-dropdown" name="netincome-dropdown"> <option>Monthly</option> <option>Yearly</option> </select></div> [contact-field label="Current Employer" type="text" /] <div> <label class="select" for="19-year-of-empl2">Time at this employer</label> [contact-field label="Year" type="select" options="0,1,2,3,4,5,6,7,8,9,10,More" /] [contact-field label="Month" type="select" options="0,1,2,3,4,5,6,7,8,9,10,11,12" /] </div> </div> <div style="float: right; width: 320px; padding-right: 50px;"> <div> <label class="select" for="19-date-of-birth3">Date of Birth</label> [contact-field label="Day" type="select" options="1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31" /] [contact-field label="Month" type="select" 
options="January,February,March,April,May,June,July,August,September,October,November,December" /] [contact-field label="Year" type="select" options="2000,1999,1998,1997,1996,1995,1994,1993,1992,1991,1990,1989,1988,1987,1986,1985,1984,1983,1982,1981,1980,1979,1978,197,1976,1975,1974,1973,1972,1971,1970,1969,1968,1967,1966,1965,1964,1963,1962,1961,1960,1959,1958,1957,1956,1955,1954,1953,1952,1951,1950,1949,1948,1947,1946,1945,1944,1943,1942,1941,1940,1939,1938,1937,1936,1935,1934,1933,1932,1931,1930,1929,1928,1927,1926,1925,1924,1923,1922,1921,1920, 1919,1918,1917,1916,1915,1914,1913,1912,1911,1910,1909" /] </div> [contact-field label="Address" type="text" /] [contact-field label="Suburb" type="text" /] [contact-field label="Postcode" type="text" /] <div> [contact-field label="State" type="select" options="VIC,NSW,QLD,SA,WA,TAS,NZ,Other" /] </div> [contact-field label="Best Contact" type="radio" options="Landline,Mobile" /] [contact-field label="Phone Number" type="text" /] <div></div> <div></div> [contact-field label="Marital Status" type="select" options="Married,Single,Other" /] [contact-field label="Residential Status" type="select" options="Renting, Home Owned, Home Mortgage, Board, Other" /] [contact-field label="Children/Dependents" type="select" options="0,1,2,3,4,5,6" /] <div></div> <div><label class="text" for="netincome">Net Income</label> <input id="netincome" type="text" name="netincome" /> <select id="netincome-dropdown" name="netincome-dropdown"> <option>Monthly</option> <option>Yearly</option> </select></div> [contact-field label="Current Employer" type="text" /] <div> <label class="select" for="19-year-of-empl">Time at this employer</label> [contact-field label="Year" type="select" options="0,1,2,3,4,5,6,7,8,9,10,More" /] [contact-field label="Month" type="select" options="0,1,2,3,4,5,6,7,8,9,10,11,12" /] </div> </div> <div style="clear: both;"></div> <div></div> </div> &nbsp; [/contact-form][/raw] </div>

    Read the article

  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
    I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx . One thing I noticed is that that logs show up immediately in My Azure Table Storage. I wonder if this is expected with Custom Trace Listeners or because I am in a development environment. My diagnosics.wadcfg <?xml version="1.0" encoding="utf-8"?> <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M""overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"> <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" /> <Directories scheduledTransferPeriod="PT1M"> <IISLogs container="wad-iis-logfiles" /> <CrashDumps container="wad-crash-dumps" /> </Directories> <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" /> </DiagnosticMonitorConfiguration> I have changed my approach a bit. Now I am defining in the web config of my webrole. I notice when I set autoflush to true in the webconfig, every thing works but scheduledTransferPeriod is not honored because the flush method pushes to the table storage. I would like to have scheduleTransferPeriod trigger the flush or trigger flush after a certain number of log entries like the buffer is full. Then I can also flush on server shutdown. Is there any method or event on the CustomTraceListener where I can listen to the scheduleTransferPeriod? <system.diagnostics> <!--http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx By default autoflush is false. By default useGlobalLock is true. While we try to be threadsafe, we keep this default for now. Later if we would like to increase performance we can remove this. see http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx --> <trace> <listeners> <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" /> <remove name="Default" /> </listeners> </trace> </system.diagnostics> I have modified the custom trace listener to the following: namespace Pos.Services.Implementation { class TableTraceListener : TraceListener { #region Fields //connection string for azure storage readonly string _connectionString; //Custom sql storage table for logs. 
//TODO put in config readonly string _diagnosticsTable; [ThreadStatic] static StringBuilder _messageBuffer; readonly object _initializationSection = new object(); bool _isInitialized; CloudTableClient _tableStorage; readonly object _traceLogAccess = new object(); readonly List<LogEntry> _traceLog = new List<LogEntry>(); #endregion #region Constructors public TableTraceListener() : base("TableTraceListener") { _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection"); _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName"); } #endregion #region Methods /// <summary> /// Flushes the entries to the storage table /// </summary> public override void Flush() { if (!_isInitialized) { lock (_initializationSection) { if (!_isInitialized) { Initialize(); } } } var context = _tableStorage.GetTableServiceContext(); context.MergeOption = MergeOption.AppendOnly; lock (_traceLogAccess) { _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry)); _traceLog.Clear(); } if (context.Entities.Count > 0) { context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null); } } /// <summary> /// Creates the storage table object. This class does not need to be locked because the caller is locked. /// </summary> private void Initialize() { var account = CloudStorageAccount.Parse(_connectionString); _tableStorage = account.CreateCloudTableClient(); _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists(); _isInitialized = true; } public override bool IsThreadSafe { get { return true; } } #region Trace and Write Methods /// <summary> /// Writes the message to a string buffer /// </summary> /// <param name="message">the Message</param> public override void Write(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.Append(message); } /// <summary> /// Writes the message with a line breaker to a string buffer /// </summary> /// <param name="message"></param> public override void WriteLine(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.AppendLine(message); } /// <summary> /// Appends the trace information and message /// </summary> /// <param name="eventCache">the Event Cache</param> /// <param name="source">the Source</param> /// <param name="eventType">the Event Type</param> /// <param name="id">the Id</param> /// <param name="message">the Message</param> public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message) { base.TraceEvent(eventCache, source, eventType, id, message); AppendEntry(id, eventType, eventCache); } /// <summary> /// Adds the trace information to a collection of LogEntry objects /// </summary> /// <param name="id">the Id</param> /// <param name="eventType">the Event Type</param> /// <param name="eventCache">the EventCache</param> private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); var message = _messageBuffer.ToString(); _messageBuffer.Length = 0; if (message.EndsWith(Environment.NewLine)) message = message.Substring(0, message.Length - Environment.NewLine.Length); if (message.Length == 0) return; var entry = new LogEntry() { PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30), RowKey = string.Format("{0:D19}", eventCache.Timestamp), EventTickCount = eventCache.Timestamp, Level = (int)eventType, 
EventId = id, Pid = eventCache.ProcessId, Tid = eventCache.ThreadId, Message = message }; lock (_traceLogAccess) _traceLog.Add(entry); } #endregion #endregion } }
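As far as I know, there is no event on a custom TraceListener that fires when the diagnostics agent's scheduledTransferPeriod elapses; that interval only governs the agent's transfer of its own buffered logs, so a listener that writes directly to table storage never sees it. A common workaround (not from the original post) is to drive Flush() from your own timer and from a buffer-size threshold, and to flush one final time on shutdown. Below is a minimal sketch under those assumptions; PeriodicFlusher, the flush period, and the entry threshold are hypothetical names chosen for illustration, not part of the Azure SDK.

using System;
using System.Threading;

namespace Pos.Services.Implementation
{
    // Hypothetical helper: invokes the listener's Flush() on a fixed period and
    // whenever the caller reports that the in-memory buffer has grown too large.
    sealed class PeriodicFlusher : IDisposable
    {
        readonly Timer _timer;
        readonly Action _flush;
        readonly int _maxBufferedEntries;

        public PeriodicFlusher(Action flush, TimeSpan period, int maxBufferedEntries)
        {
            _flush = flush;
            _maxBufferedEntries = maxBufferedEntries;
            // System.Threading.Timer runs the callback on a thread-pool thread;
            // TableTraceListener.Flush() already locks its own state, so this is safe.
            _timer = new Timer(_ => _flush(), null, period, period);
        }

        // Call this after each AppendEntry with the current count of buffered entries.
        public void NotifyBuffered(int bufferedCount)
        {
            if (bufferedCount >= _maxBufferedEntries)
                _flush();
        }

        public void Dispose()
        {
            _timer.Dispose();
            _flush(); // final flush on role/host shutdown
        }
    }
}

The listener could create one of these in its constructor, for example new PeriodicFlusher(Flush, TimeSpan.FromMinutes(30), 100), call NotifyBuffered from AppendEntry, and dispose it from the role's OnStop (or the listener's own cleanup) so the last buffered entries are written out.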

    Read the article

  • Agile Development

    - by James Oloo Onyango
    A lot of literature has been and is being written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and thorough! Enjoy the read! Rufus Inc Project Kick Off Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting. "We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market. "We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.   His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts.   "Sir, we can't tell you how long the analysis will take until we have some requirements." "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?" Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?" "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere.   "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?" Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months."   "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working. 
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawedno, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."   So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.     The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week." Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association. Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."   You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents. 
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more, preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"   You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.   Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1."   As he leaves the conference room, he is heard to mutter: "Empowerment, bah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done?" he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.   
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now."Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation." Says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say, "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper." says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."   In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels that can be achieved in the next 3 weeks.   And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things had quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements,   Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.   "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."   * * *   The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"   "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."   "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method. "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hay!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.   As development proceeded, the developers changed partners once or twice a day. 
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant whether a whole task or simply an important part of a task they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service-oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides clients calling the services, in a large distributed system a service may invoke other services, in which case it needs to know the endpoints of the services it invokes. This might not be a problem in a small system, but when you have more than 10 services it becomes one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.   Benefit of Discovery Service Since almost all my services need to invoke at least one other service, it would be a difficult task to make sure all service endpoints are configured correctly in every service, and it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints for every service. With a discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X can then use one of those endpoints to send its request. And when a service changes its endpoint address, all it needs to do is update its record in the discovery service, and every other service will learn the new endpoint. WCF 4.0 Discovery supports both managed proxy discovery mode and ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services which have enabled the discovery behavior receive this message, and only the matching services send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.   Service Announcement and Probe The main functionality of the discovery service is to return the proper endpoint addresses to whichever service is looking for them. In most cases the consuming service (as a client) sends the contract it wants to invoke to the discovery service, and the discovery service finds the matching endpoints and responds. Sometimes the contract and endpoint are not enough; the request may also contain versioning and extension attributes. In this post I will only cover the case that includes the contract and endpoint. When a client (or a service that needs to invoke another service) wants to connect to a target service, it first requests the discovery service through the "Probe" method with its criteria. Basically the criteria contain the contract type name of the target service. The discovery service then searches its endpoint repository by the criteria; the repository might be a database, a distributed cache or a flat XML file. For each match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe". 
Finally the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service should take responsibility for knowing when a new service becomes available (goes online), as well as when it stops (goes offline). This feature is named "Announcement". When a service starts and stops, it announces itself to the discovery service. So the basic functionality of a discovery service should include: 1, An endpoint which receives the service online message and adds the service endpoint information to the discovery repository. 2, An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository. 3, An endpoint which receives the client probe message and returns the discovery endpoint metadata of the matching service endpoints. WCF 4.0's discovery support covers all these features in its infrastructure classes.   Discovery Service in WCF 4.0 WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery service mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service happen asynchronously, for better service scalability and performance. 1, OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: Invoked when a service sends the online announcement message. We need to add the endpoint information to the repository in this method. 2, OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: Invoked when a service sends the offline announcement message. We need to remove the endpoint information from the repository in this method. 3, OnBeginFind, OnEndFind: Invoked when a client sends the probe message to find the service endpoint information. We need to look for the proper endpoints by matching the client's criteria against the repository in this method. 4, OnBeginResolve, OnEndResolve: Invoked when a client sends the resolve message. Different from the find method, the resolve method returns exactly one service endpoint metadata to the client. In our example we will NOT implement this method.   Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
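Before moving to the client side, it may help to see roughly what ServiceDiscoveryBehavior does when the host opens and closes. The following is only an illustrative sketch (it reuses the host and announcementEndpoint variables from the snippets above and assumes the same usings), not the behavior's actual implementation:

// Illustration only: announce a hosted endpoint online and offline by hand.
// ServiceDiscoveryBehavior wires the equivalent calls to the host open/close events for you.
using (var announcementClient = new AnnouncementClient(announcementEndpoint))
{
    var metadata = EndpointDiscoveryMetadata.FromServiceEndpoint(host.Description.Endpoints.First());
    announcementClient.AnnounceOnline(metadata);   // sent when the host opens (online message)
    // ... later, when the service is about to shut down:
    announcementClient.AnnounceOffline(metadata);  // sent when the host closes (offline message)
}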
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let's have a test. First start the discovery service, and then start our discoverable service. When it starts it announces itself to the discovery service and registers its endpoint into the repository, which is the local dictionary. Then start the client and type something. As you can see, the client asks the discovery service for the endpoint and then establishes the connection to the discoverable service. More interestingly, do NOT close the client console, but terminate the discoverable service by pressing the enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted on another address. If we enter something in the client we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.   Summary In this post I discussed the benefits of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For testing purposes, in this example I used an in-memory dictionary as the discovery endpoint metadata repository, and when finding I also just return the first matched endpoint. I also hard coded the bindings between the discoverable service and the client. In the next post I will show you how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • New features of C# 4.0

    This article covers New features of C# 4.0. Article has been divided into below sections. Introduction. Dynamic Lookup. Named and Optional Arguments. Features for COM interop. Variance. Relationship with Visual Basic. Resources. Other interested readings… 22 New Features of Visual Studio 2008 for .NET Professionals 50 New Features of SQL Server 2008 IIS 7.0 New features Introduction It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out. Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios. C# 4.0 The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include a. objects from dynamic programming languages, such as Python or Ruby b. COM objects accessed through IDispatch c. ordinary .NET types accessed through reflection d. objects with changing structure, such as HTML DOM objects While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups: Dynamic lookup Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead gets resolved at runtime. Named and optional parameters Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position. 
COM specific interop features Dynamic lookup as well as named and optional parameters both help making programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience. Variance It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type safe “co-and contravariance” and common BCL types are updated to take advantage of that. Dynamic Lookup Dynamic lookup allows you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: Static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway, in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call. The dynamic type C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime: dynamic d = GetDynamicObject(…); d.M(7); The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs: dynamic d = 7; // implicit conversion int i = d; // assignment conversion Dynamic operations Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically: dynamic d = GetDynamicObject(…); d.M(7); // calling methods d.f = d.P; // getting and settings fields and properties d[“one”] = d[“two”]; // getting and setting thorugh indexers int i = d + 3; // calling operators string s = d(5,7); // invoking as a delegate The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime. The result of any dynamic operation is itself of type dynamic. Runtime lookup At runtime a dynamic operation is dispatched according to the nature of its target object d: COM objects If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties. 
Dynamic objects If d implements the interface IDynamicObject d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object’s properties using property syntax. Plain objects Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# “runtime binder” which implements C#’s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to “finish the work” on dynamic operations that was deferred by the static compiler. Example Assume the following code: dynamic d1 = new Foo(); dynamic d2 = new Bar(); string s; d1.M(s, d2, 3, null); Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the “payload”) is essentially equivalent to: “Perform an instance method call of M with the following arguments: 1. a string 2. a dynamic 3. a literal int 3 4. a literal object null” At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up to finish the overload resolution job based on runtime type information, proceeding as follows: 1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2. 2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics. 3. If the method is found it is invoked; otherwise a runtime exception is thrown. Overload resolution with dynamic arguments Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic: Foo foo = new Foo(); dynamic d = new Bar(); var result = foo.M(d); The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic. The Dynamic Language Runtime An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR. 
For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain. Open issues There are a few limitations and things that might work differently than you would expect. · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn’t have syntax to support this. · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload. · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. “understand”) an anonymous function without knowing what type it is converted to. One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects: dynamic collection = …; var result = collection.Select(e => e + 5); If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0. Named and Optional Arguments Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments is a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET however you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations. Optional parameters A parameter is declared optional simply by providing a default value for it: public void M(int x, int y = 5, int z = 7); Here y and z are optional parameters and can be omitted in calls: M(1, 2, 3); // ordinary call of M M(1, 2); // omitting z – equivalent to M(1, 2, 7) M(1); // omitting both y and z – equivalent to M(1, 5, 7) Named and optional arguments C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write: M(1, z: 3); // passing z by name or M(x: 1, z: 3); // passing both x and z by name or even M(z: 3, x: 1); // reversing the order of arguments All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors. 
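As a brief illustrative sketch of that last point (the Cache type below is invented for this example and is not one of the whitepaper's samples), optional and named arguments on a constructor and an indexer could look like this:

using System.Collections.Generic;

class Cache
{
    private readonly Dictionary<string, string> items;

    // Optional parameter on a constructor.
    public Cache(int capacity = 16)
    {
        items = new Dictionary<string, string>(capacity);
    }

    // Optional parameter on an indexer.
    public string this[string key, bool ignoreCase = false]
    {
        get { return items[ignoreCase ? key.ToLowerInvariant() : key]; }
        set { items[ignoreCase ? key.ToLowerInvariant() : key] = value; }
    }
}

// Usage:
// var cache = new Cache();               // capacity omitted, defaults to 16
// var named = new Cache(capacity: 64);   // argument passed by name
// cache["answer"] = "42";                // ignoreCase omitted
// string v = cache["ANSWER", ignoreCase: true];  // finds "answer"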
Overload resolution Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred. M(string s, int i = 1); M(object o); M(int i, string s = “Hello”); M(int i); M(5); Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn’t convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int). Features for COM interop Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0. Dynamic import Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say excel.Cells[1, 1].Value = "Hello"; instead of ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello"; and Excel.Range range = excel.Cells[1, 1]; instead of Excel.Range range = (Excel.Range)excel.Cells[1, 1]; Compiling without PIAs Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types where really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded. Omitting ref Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters. 
It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference. Open issues A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0. Variance An aspect of generics that often comes across as surprising is that the following is illegal: IList<string> strings = new List<string>(); IList<object> objects = strings; The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write: objects[0] = 5; string s = strings[0]; Allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say: IEnumerable<object> objects = strings; There is no way we can put the wrong kind of thing into strings through objects, because objects doesn’t have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work. Covariance In .NET 4.0 the IEnumerable<T> interface will be declared in the following way: public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IEnumerator { bool MoveNext(); T Current { get; } } The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above: var result = strings.Union(objects); // succeeds with an IEnumerable<object> This would previously have been disallowed, and you would have had to to some cumbersome wrapping to get the two sequences to have the same element type. Contravariance Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>: public interface IComparer<in T> { public int Compare(T left, T right); } The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: If a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance. 
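As a brief illustrative sketch (the OrdinalTextComparer type below is invented for this example), contravariance lets a comparer written against object stand in wherever a comparer of string is required:

using System;
using System.Collections.Generic;

// A comparer that can compare any two objects by their text representation.
class OrdinalTextComparer : IComparer<object>
{
    public int Compare(object left, object right)
    {
        return string.CompareOrdinal(left.ToString(), right.ToString());
    }
}

class ContravarianceDemo
{
    static void Main()
    {
        // Contravariance: an IComparer<object> is usable as an IComparer<string>.
        IComparer<string> byText = new OrdinalTextComparer();

        var names = new List<string> { "delta", "alpha", "charlie" };
        names.Sort(byText);  // sorts using the object-based comparer

        Console.WriteLine(string.Join(", ", names));  // alpha, charlie, delta
    }
}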
A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types: public delegate TResult Func<in TArg, out TResult>(TArg arg); Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>. Limitations Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types. COM Example Here is a larger Office automation example that shows many of the new C# features in action. using System; using System.Diagnostics; using System.Linq; using Excel = Microsoft.Office.Interop.Excel; using Word = Microsoft.Office.Interop.Word; class Program { static void Main(string[] args) { var excel = new Excel.Application(); excel.Visible = true; excel.Workbooks.Add(); // optional arguments omitted excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically excel.Cells[1, 2].Value = "Memory Usage"; // accessed var processes = Process.GetProcesses() .OrderByDescending(p =&gt; p.WorkingSet) .Take(10); int i = 2; foreach (var p in processes) { excel.Cells[i, 1].Value = p.ProcessName; // no casts excel.Cells[i, 2].Value = p.WorkingSet; // no casts i++; } Excel.Range range = excel.Cells[1, 1]; // no casts Excel.Chart chart = excel.ActiveWorkbook.Charts. Add(After: excel.ActiveSheet); // named and optional arguments chart.ChartWizard( Source: range.CurrentRegion, Title: "Memory Usage in " + Environment.MachineName); //named+optional chart.ChartStyle = 45; chart.CopyPicture(Excel.XlPictureAppearance.xlScreen, Excel.XlCopyPictureFormat.xlBitmap, Excel.XlPictureAppearance.xlScreen); var word = new Word.Application(); word.Visible = true; word.Documents.Add(); // optional arguments word.Selection.Paste(); } } The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges. Relationship with Visual Basic A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic: · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#. · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind. · NoPIA and variance are both being introduced to VB and C# at the same time. VB in turn is adding a number of features that have hitherto been a mainstay of C#. 
As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone. Resources All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Garbled text in Screen [closed]

    - by Prabin Dahal
    The graphical Interface in my system is garbled with some text. At the beginning I thought it was due to java and tomcat that I installed. But after removing java and tomcat, it is still the same. I am using ubuntu server and i have installed xfce desktop environment with oboard softkey I have added my dmesg output to this message. What is the problem here. I am not able to figure it out. Thank you for your help. Prabin [ 0.390936] usbcore: registered new interface driver usbfs [ 0.391006] usbcore: registered new interface driver hub [ 0.391147] usbcore: registered new device driver usb [ 0.391580] PCI: Using ACPI for IRQ routing [ 0.400509] PCI: pci_cache_line_size set to 64 bytes [ 0.400669] reserve RAM buffer: 000000000009ec00 - 000000000009ffff [ 0.400681] reserve RAM buffer: 000000007f597000 - 000000007fffffff [ 0.400699] reserve RAM buffer: 000000007f6f0000 - 000000007fffffff [ 0.401135] NetLabel: Initializing [ 0.401155] NetLabel: domain hash size = 128 [ 0.401168] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.401212] NetLabel: unlabeled traffic allowed by default [ 0.401466] HPET: 3 timers in total, 0 timers will be used for per-cpu timer [ 0.401494] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 0.401520] hpet0: 3 comparators, 64-bit 14.318180 MHz counter [ 0.408228] Switching to clocksource hpet [ 0.434341] AppArmor: AppArmor Filesystem Enabled [ 0.434447] pnp: PnP ACPI init [ 0.434531] ACPI: bus type pnp registered [ 0.434784] pnp 00:00: [bus 00-ff] [ 0.434794] pnp 00:00: [io 0x0cf8-0x0cff] [ 0.434804] pnp 00:00: [io 0x0000-0x0cf7 window] [ 0.434813] pnp 00:00: [io 0x0d00-0xffff window] [ 0.434822] pnp 00:00: [mem 0x000a0000-0x000bffff window] [ 0.434831] pnp 00:00: [mem 0x00000000 window] [ 0.434840] pnp 00:00: [mem 0x80000000-0xffffffff window] [ 0.435018] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) [ 0.435526] pnp 00:01: [mem 0xe0000000-0xefffffff] [ 0.435537] pnp 00:01: [mem 0x7f700000-0x7f7fffff] [ 0.435545] pnp 00:01: [mem 0x7f800000-0x7fffffff] [ 0.435554] pnp 00:01: [mem 0xfee00000-0xfeefffff] [ 0.435727] system 00:01: [mem 0xe0000000-0xefffffff] has been reserved [ 0.435754] system 00:01: [mem 0x7f700000-0x7f7fffff] has been reserved [ 0.435775] system 00:01: [mem 0x7f800000-0x7fffffff] has been reserved [ 0.435796] system 00:01: [mem 0xfee00000-0xfeefffff] has been reserved [ 0.435818] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.436233] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436245] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436414] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.436512] pnp 00:03: [io 0x0060] [ 0.436521] pnp 00:03: [io 0x0064] [ 0.436548] pnp 00:03: [irq 1] [ 0.436682] pnp 00:03: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) [ 0.436825] pnp 00:04: [irq 12] [ 0.436958] pnp 00:04: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) [ 0.437835] pnp 00:05: [io 0x03f8-0x03ff] [ 0.437861] pnp 00:05: [irq 4] [ 0.437870] pnp 00:05: [dma 0 disabled] [ 0.438142] pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439014] pnp 00:06: [io 0x02f8-0x02ff] [ 0.439036] pnp 00:06: [irq 3] [ 0.439045] pnp 00:06: [dma 0 disabled] [ 0.439297] pnp 00:06: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439346] pnp 00:07: [io 0x0000-0x000f] [ 0.439355] pnp 00:07: [io 0x0081-0x0083] [ 0.439363] pnp 00:07: [io 0x0087] [ 0.439371] pnp 00:07: [io 0x0089-0x008b] [ 0.439380] pnp 00:07: [io 0x008f] [ 0.439388] pnp 00:07: [io 0x00c0-0x00df] [ 0.439563] system 00:07: 
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.439617] pnp 00:08: [io 0x0070-0x0077] [ 0.439639] pnp 00:08: [irq 8] [ 0.439751] pnp 00:08: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.439788] pnp 00:09: [io 0x0061] [ 0.439893] pnp 00:09: Plug and Play ACPI device, IDs PNP0800 (active) [ 0.439977] pnp 00:0a: [io 0x0010-0x001f] [ 0.439986] pnp 00:0a: [io 0x0022-0x003f] [ 0.439994] pnp 00:0a: [io 0x0044-0x005f] [ 0.440055] pnp 00:0a: [io 0x0063] [ 0.440063] pnp 00:0a: [io 0x0065] [ 0.440071] pnp 00:0a: [io 0x0067-0x006f] [ 0.440079] pnp 00:0a: [io 0x0072-0x007f] [ 0.440086] pnp 00:0a: [io 0x0080] [ 0.440094] pnp 00:0a: [io 0x0084-0x0086] [ 0.440102] pnp 00:0a: [io 0x0088] [ 0.440109] pnp 00:0a: [io 0x008c-0x008e] [ 0.440117] pnp 00:0a: [io 0x0090-0x009f] [ 0.440125] pnp 00:0a: [io 0x00a2-0x00bf] [ 0.440133] pnp 00:0a: [io 0x00e0-0x00ef] [ 0.440141] pnp 00:0a: [io 0x04d0-0x04d1] [ 0.440150] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440160] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440168] pnp 00:0a: [io 0x03f4] [ 0.440175] pnp 00:0a: [io 0x03f5] [ 0.440183] pnp 00:0a: [io 0x0374] [ 0.440190] pnp 00:0a: [io 0x0375] [ 0.440405] system 00:0a: [io 0x04d0-0x04d1] has been reserved [ 0.440432] system 00:0a: [io 0x03f4] has been reserved [ 0.440451] system 00:0a: [io 0x03f5] has been reserved [ 0.440469] system 00:0a: [io 0x0374] has been reserved [ 0.440488] system 00:0a: [io 0x0375] has been reserved [ 0.440508] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.440550] pnp 00:0b: [io 0x00f0-0x00ff] [ 0.440572] pnp 00:0b: [irq 13] [ 0.440691] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active) [ 0.440770] pnp 00:0c: [io 0x0810] [ 0.440779] pnp 00:0c: [io 0x0800-0x080f] [ 0.440787] pnp 00:0c: [io 0xffff] [ 0.440947] system 00:0c: [io 0x0810] has been reserved [ 0.440970] system 00:0c: [io 0x0800-0x080f] has been reserved [ 0.440989] system 00:0c: [io 0xffff] has been reserved [ 0.441010] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.441620] pnp 00:0d: [io 0x0900-0x097f] [ 0.441630] pnp 00:0d: [io 0x09c0-0x09ff] [ 0.441639] pnp 00:0d: [io 0x0400-0x043f] [ 0.441647] pnp 00:0d: [io 0x0480-0x04bf] [ 0.441656] pnp 00:0d: [mem 0xfec00000-0xfec85fff] [ 0.441664] pnp 00:0d: [mem 0xfed1c000-0xfed1ffff] [ 0.441673] pnp 00:0d: [mem 0x000c0000-0x000dffff] [ 0.441689] pnp 00:0d: [mem 0x000e0000-0x000effff] [ 0.441697] pnp 00:0d: [mem 0x000f0000-0x000fffff] [ 0.441706] pnp 00:0d: [mem 0xff800000-0xffffffff] [ 0.441911] system 00:0d: [io 0x0900-0x097f] has been reserved [ 0.441935] system 00:0d: [io 0x09c0-0x09ff] has been reserved [ 0.441955] system 00:0d: [io 0x0400-0x043f] has been reserved [ 0.441975] system 00:0d: [io 0x0480-0x04bf] has been reserved [ 0.441997] system 00:0d: [mem 0xfec00000-0xfec85fff] could not be reserved [ 0.442019] system 00:0d: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 0.442040] system 00:0d: [mem 0x000c0000-0x000dffff] could not be reserved [ 0.442061] system 00:0d: [mem 0x000e0000-0x000effff] could not be reserved [ 0.442082] system 00:0d: [mem 0x000f0000-0x000fffff] could not be reserved [ 0.442103] system 00:0d: [mem 0xff800000-0xffffffff] has been reserved [ 0.442126] system 00:0d: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.442308] pnp 00:0e: [mem 0xfed00000-0xfed003ff] [ 0.442454] pnp 00:0e: Plug and Play ACPI device, IDs PNP0103 (active) [ 0.442569] pnp 00:0f: [mem 0x7f6f0000-0x7f6fffff] [ 0.442762] system 00:0f: [mem 0x7f6f0000-0x7f6fffff] has been reserved [ 0.442788] system 00:0f: 
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.443360] pnp: PnP ACPI: found 16 devices [ 0.443378] ACPI: ACPI bus type pnp unregistered [ 0.443395] PnPBIOS: Disabled by ACPI PNP [ 0.486106] PCI: max bus depth: 3 pci_try_num: 4 [ 0.486189] pci 0000:00:1c.0: PCI bridge to [bus 01-01] [ 0.486217] pci 0000:00:1c.0: bridge window [io 0xe000-0xefff] [ 0.486241] pci 0000:00:1c.0: bridge window [mem 0xd0100000-0xd01fffff] [ 0.486266] pci 0000:00:1c.0: bridge window [mem 0xff700000-0xff7fffff pref] [ 0.486298] pci 0000:03:01.0: PCI bridge to [bus 04-04] [ 0.486319] pci 0000:03:01.0: bridge window [io 0xd000-0xdfff] [ 0.486348] pci 0000:03:01.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486374] pci 0000:03:01.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486406] pci 0000:03:02.0: PCI bridge to [bus 05-05] [ 0.486444] pci 0000:03:03.0: PCI bridge to [bus 06-06] [ 0.486479] pci 0000:02:00.0: PCI bridge to [bus 03-06] [ 0.486499] pci 0000:02:00.0: bridge window [io 0xd000-0xdfff] [ 0.486522] pci 0000:02:00.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486545] pci 0000:02:00.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486575] pci 0000:00:1c.1: PCI bridge to [bus 02-06] [ 0.486593] pci 0000:00:1c.1: bridge window [io 0xd000-0xdfff] [ 0.486615] pci 0000:00:1c.1: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486637] pci 0000:00:1c.1: bridge window [mem 0xff600000-0xff6fffff pref] [ 0.486710] pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.486735] pci 0000:00:1c.0: setting latency timer to 64 [ 0.486774] pci 0000:00:1c.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.486796] pci 0000:00:1c.1: setting latency timer to 64 [ 0.486817] pci 0000:02:00.0: setting latency timer to 64 [ 0.486836] pci 0000:03:01.0: setting latency timer to 64 [ 0.486858] pci 0000:03:02.0: setting latency timer to 64 [ 0.486880] pci 0000:03:03.0: setting latency timer to 64 [ 0.486893] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7] [ 0.486902] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff] [ 0.486912] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff] [ 0.486922] pci_bus 0000:00: resource 7 [mem 0x80000000-0xffffffff] [ 0.486932] pci_bus 0000:01: resource 0 [io 0xe000-0xefff] [ 0.486941] pci_bus 0000:01: resource 1 [mem 0xd0100000-0xd01fffff] [ 0.486951] pci_bus 0000:01: resource 2 [mem 0xff700000-0xff7fffff pref] [ 0.486961] pci_bus 0000:02: resource 0 [io 0xd000-0xdfff] [ 0.486970] pci_bus 0000:02: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.486980] pci_bus 0000:02: resource 2 [mem 0xff600000-0xff6fffff pref] [ 0.486989] pci_bus 0000:03: resource 0 [io 0xd000-0xdfff] [ 0.486998] pci_bus 0000:03: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487008] pci_bus 0000:03: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487018] pci_bus 0000:04: resource 0 [io 0xd000-0xdfff] [ 0.487028] pci_bus 0000:04: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487038] pci_bus 0000:04: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487177] NET: Registered protocol family 2 [ 0.487405] IP route cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.488397] TCP established hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.489792] TCP bind hash table entries: 65536 (order: 7, 524288 bytes) [ 0.490493] TCP: Hash tables configured (established 131072 bind 65536) [ 0.490525] TCP reno registered [ 0.490551] UDP hash table entries: 512 (order: 2, 16384 bytes) [ 0.490590] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes) [ 0.490898] NET: Registered protocol family 1 [ 
0.490970] pci 0000:00:02.0: Boot video device [ 0.491052] pci 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 0.491092] pci 0000:00:1d.0: PCI INT A disabled [ 0.491134] pci 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.491174] pci 0000:00:1d.1: PCI INT B disabled [ 0.491220] pci 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 0.491259] pci 0000:00:1d.2: PCI INT C disabled [ 0.491307] pci 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 0.864431] Freeing initrd memory: 13820k freed [ 2.088042] pci 0000:00:1d.7: EHCI: BIOS handoff failed (BIOS bug?) 01010001 [ 2.088207] pci 0000:00:1d.7: PCI INT D disabled [ 2.088267] PCI: CLS 64 bytes, default 64 [ 2.089248] audit: initializing netlink socket (disabled) [ 2.089287] type=2000 audit(1349363630.084:1): initialized [ 2.144783] highmem bounce pool size: 64 pages [ 2.144808] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 2.160057] VFS: Disk quotas dquot_6.5.2 [ 2.160232] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) [ 2.161716] fuse init (API version 7.17) [ 2.161995] msgmni has been set to 1713 [ 2.162925] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 2.163008] io scheduler noop registered [ 2.163023] io scheduler deadline registered [ 2.163048] io scheduler cfq registered (default) [ 2.163339] pcieport 0000:00:1c.0: setting latency timer to 64 [ 2.163530] pcieport 0000:00:1c.1: setting latency timer to 64 [ 2.163706] pcieport 0000:02:00.0: setting latency timer to 64 [ 2.163873] pcieport 0000:03:01.0: setting latency timer to 64 [ 2.163964] pcieport 0000:03:01.0: irq 40 for MSI/MSI-X [ 2.164193] pcieport 0000:03:02.0: setting latency timer to 64 [ 2.164272] pcieport 0000:03:02.0: irq 41 for MSI/MSI-X [ 2.164453] pcieport 0000:03:03.0: setting latency timer to 64 [ 2.164531] pcieport 0000:03:03.0: irq 42 for MSI/MSI-X [ 2.164783] pcieport 0000:00:1c.0: Signaling PME through PCIe PME interrupt [ 2.164801] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt [ 2.164816] pcie_pme 0000:00:1c.0:pcie01: service driver pcie_pme loaded [ 2.164853] pcieport 0000:00:1c.1: Signaling PME through PCIe PME interrupt [ 2.164867] pcieport 0000:02:00.0: Signaling PME through PCIe PME interrupt [ 2.164880] pcieport 0000:03:01.0: Signaling PME through PCIe PME interrupt [ 2.164892] pci 0000:04:00.0: Signaling PME through PCIe PME interrupt [ 2.164904] pcieport 0000:03:02.0: Signaling PME through PCIe PME interrupt [ 2.164917] pcieport 0000:03:03.0: Signaling PME through PCIe PME interrupt [ 2.164932] pcie_pme 0000:00:1c.1:pcie01: service driver pcie_pme loaded [ 2.164988] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 2.165115] pciehp 0000:00:1c.0:pcie04: HPC vendor_id 8086 device_id 8110 ss_vid 8086 ss_did 8119 [ 2.165177] pciehp 0000:00:1c.0:pcie04: service driver pciehp loaded [ 2.165199] pciehp 0000:00:1c.1:pcie04: HPC vendor_id 8086 device_id 8112 ss_vid 8086 ss_did 8119 [ 2.165260] pciehp 0000:00:1c.1:pcie04: service driver pciehp loaded [ 2.165290] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 2.165488] intel_idle: MWAIT substates: 0x3020220 [ 2.165508] intel_idle: v0.4 model 0x1C [ 2.165513] intel_idle: lapic_timer_reliable_states 0x2 [ 2.165519] Marking TSC unstable due to TSC halts in idle states deeper than C2 [ 2.165779] input: Lid Switch as /devices/LNXSYSTM:00/device:00/PNP0C0D:00/input/input0 [ 2.165855] ACPI: Lid Switch [LID] [ 2.165983] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input1 [ 2.166005] 
ACPI: Power Button [PWRB] [ 2.173811] thermal LNXTHERM:00: registered as thermal_zone0 [ 2.173829] ACPI: Thermal Zone [TZ00] (48 C) [ 2.174004] thermal LNXTHERM:01: registered as thermal_zone1 [ 2.174018] ACPI: Thermal Zone [TZ01] (34 C) [ 2.174194] thermal LNXTHERM:02: registered as thermal_zone2 [ 2.174207] ACPI: Thermal Zone [TZ02] (34 C) [ 2.174378] thermal LNXTHERM:03: registered as thermal_zone3 [ 2.174392] ACPI: Thermal Zone [TZ03] (34 C) [ 2.174503] ERST: Table is not found! [ 2.174513] GHES: HEST is not enabled! [ 2.174601] isapnp: Scanning for PnP cards... [ 2.176175] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 2.196702] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.292409] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.528909] isapnp: No Plug & Play device found [ 2.588733] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.624523] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.640702] Linux agpgart interface v0.103 [ 2.645138] brd: module loaded [ 2.647452] loop: module loaded [ 2.648149] pata_acpi 0000:00:1f.1: setting latency timer to 64 [ 2.649238] Fixed MDIO Bus: probed [ 2.649315] tun: Universal TUN/TAP device driver, 1.6 [ 2.649327] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 2.649524] PPP generic driver version 2.4.2 [ 2.649824] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 2.649884] ehci_hcd 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 2.649937] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 2.649946] ehci_hcd 0000:00:1d.7: EHCI Host Controller [ 2.650082] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1 [ 2.650148] ehci_hcd 0000:00:1d.7: debug port 1 [ 2.654045] ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported [ 2.654093] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xd02c4000 [ 2.668035] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 2.668392] hub 1-0:1.0: USB hub found [ 2.668413] hub 1-0:1.0: 8 ports detected [ 2.668618] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 2.668666] uhci_hcd: USB Universal Host Controller Interface driver [ 2.668726] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 2.668751] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 2.668759] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 2.668910] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 2.668981] uhci_hcd 0000:00:1d.0: irq 20, io base 0x0000f040 [ 2.669335] hub 2-0:1.0: USB hub found [ 2.669355] hub 2-0:1.0: 2 ports detected [ 2.669508] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 2.669531] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 2.669538] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 2.669675] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3 [ 2.669739] uhci_hcd 0000:00:1d.1: irq 21, io base 0x0000f020 [ 2.670099] hub 3-0:1.0: USB hub found [ 2.670118] hub 3-0:1.0: 2 ports detected [ 2.670271] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 2.670295] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 2.670302] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 2.670435] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4 [ 2.670502] uhci_hcd 0000:00:1d.2: irq 22, io base 0x0000f000 [ 2.670869] hub 4-0:1.0: USB hub found [ 2.670888] hub 4-0:1.0: 2 ports detected [ 2.671186] usbcore: registered new interface driver libusual [ 2.671332] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 [ 2.673408] 
serio: i8042 KBD port at 0x60,0x64 irq 1 [ 2.673437] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 2.673844] mousedev: PS/2 mouse device common for all mice [ 2.674272] rtc_cmos 00:08: RTC can wake from S4 [ 2.674482] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 [ 2.674529] rtc0: alarms up to one year, y3k, 242 bytes nvram, hpet irqs [ 2.674691] device-mapper: uevent: version 1.0.3 [ 2.674903] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: [email protected] [ 2.675024] EISA: Probing bus 0 at eisa.0 [ 2.675037] EISA: Cannot allocate resource for mainboard [ 2.675050] Cannot allocate resource for EISA slot 1 [ 2.675061] Cannot allocate resource for EISA slot 2 [ 2.675072] Cannot allocate resource for EISA slot 3 [ 2.675083] Cannot allocate resource for EISA slot 4 [ 2.675094] Cannot allocate resource for EISA slot 5 [ 2.675105] Cannot allocate resource for EISA slot 6 [ 2.675116] Cannot allocate resource for EISA slot 7 [ 2.675127] Cannot allocate resource for EISA slot 8 [ 2.675137] EISA: Detected 0 cards. [ 2.675161] cpufreq-nforce2: No nForce2 chipset. [ 2.675401] cpuidle: using governor ladder [ 2.675786] cpuidle: using governor menu [ 2.675797] EFI Variables Facility v0.08 2004-May-17 [ 2.676429] TCP cubic registered [ 2.676751] NET: Registered protocol family 10 [ 2.678031] NET: Registered protocol family 17 [ 2.678052] Registering the dns_resolver key type [ 2.678107] Using IPI No-Shortcut mode [ 2.678515] PM: Hibernation image not present or could not be loaded. [ 2.678543] registered taskstats version 1 [ 2.701145] Magic number: 0:84:234 [ 2.701312] rtc_cmos 00:08: setting system clock to 2012-10-04 15:13:51 UTC (1349363631) [ 2.702280] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 2.702294] EDD information not available. [ 2.702858] Freeing unused kernel memory: 740k freed [ 2.703630] Write protecting the kernel text: 5816k [ 2.703692] Write protecting the kernel read-only data: 2376k [ 2.703706] NX-protecting the kernel data: 4424k [ 2.751226] udevd[84]: starting version 175 [ 2.980162] usb 1-1: new high-speed USB device number 2 using ehci_hcd [ 3.001394] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.001474] r8169 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 3.001554] r8169 0000:01:00.0: setting latency timer to 64 [ 3.001654] r8169 0000:01:00.0: irq 43 for MSI/MSI-X [ 3.004220] r8169 0000:01:00.0: eth0: RTL8168c/8111c at 0xf8416000, 00:18:92:03:10:46, XID 1c4000c0 IRQ 43 [ 3.004254] r8169 0000:01:00.0: eth0: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.004347] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.005085] r8169 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 [ 3.005182] r8169 0000:04:00.0: setting latency timer to 64 [ 3.005292] r8169 0000:04:00.0: irq 44 for MSI/MSI-X [ 3.007187] r8169 0000:04:00.0: eth1: RTL8168c/8111c at 0xf8418000, 00:18:92:03:10:47, XID 1c4000c0 IRQ 44 [ 3.007224] r8169 0000:04:00.0: eth1: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.034417] pata_sch 0000:00:1f.1: version 0.2 [ 3.034518] pata_sch 0000:00:1f.1: setting latency timer to 64 [ 3.036698] scsi0 : pata_sch [ 3.039842] scsi1 : pata_sch [ 3.040913] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xf060 irq 14 [ 3.040940] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xf068 irq 15 [ 3.131850] Initializing USB Mass Storage driver... [ 3.136405] scsi2 : usb-storage 1-1:1.0 [ 3.136642] usbcore: registered new interface driver usb-storage [ 3.136656] USB Mass Storage support registered. 
[ 3.524465] usb 3-1: new low-speed USB device number 2 using uhci_hcd [ 3.968144] usb 3-2: new full-speed USB device number 3 using uhci_hcd [ 4.137903] scsi 2:0:0:0: Direct-Access TS TS4GUFM-H 1100 PQ: 0 ANSI: 0 CCS [ 4.140067] sd 2:0:0:0: Attached scsi generic sg0 type 0 [ 4.140590] sd 2:0:0:0: [sda] 8028160 512-byte logical blocks: (4.11 GB/3.82 GiB) [ 4.141597] sd 2:0:0:0: [sda] Write Protect is off [ 4.141618] sd 2:0:0:0: [sda] Mode Sense: 43 00 00 00 [ 4.142974] sd 2:0:0:0: [sda] No Caching mode page present [ 4.143000] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.145837] sd 2:0:0:0: [sda] No Caching mode page present [ 4.145858] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.147931] sda: sda1 sda2 < sda5 > [ 4.150972] sd 2:0:0:0: [sda] No Caching mode page present [ 4.151001] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.151023] sd 2:0:0:0: [sda] Attached SCSI disk [ 4.249168] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input2 [ 4.249579] generic-usb 0003:046A:004B.0001: input,hidraw0: USB HID v1.11 Keyboard [HID 046a:004b] on usb-0000:00:1d.1-1/input0 [ 4.287805] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.1/input/input3 [ 4.289235] generic-usb 0003:046A:004B.0002: input,hidraw1: USB HID v1.11 Mouse [HID 046a:004b] on usb-0000:00:1d.1-1/input1 [ 4.297604] input: EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface as /devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/input/input4 [ 4.298913] generic-usb 0003:04E7:0050.0003: input,hidraw2: USB HID v1.00 Pointer [EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface] on usb-0000:00:1d.1-2/input0 [ 4.299878] usbcore: registered new interface driver usbhid [ 4.299925] usbhid: USB HID core driver [ 4.352639] EXT4-fs (sda1): INFO: recovery required on readonly filesystem [ 4.352661] EXT4-fs (sda1): write access will be enabled during recovery [ 8.519257] EXT4-fs (sda1): recovery complete [ 8.564389] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) [ 14.280922] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 14.280944] ADDRCONF(NETDEV_UP): eth1: link is not ready [ 14.310368] udevd[308]: starting version 175 [ 14.353873] Adding 1045500k swap on /dev/sda5. Priority:-1 extents:1 across:1045500k [ 14.428718] lp: driver loaded but no devices found [ 14.521667] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro [ 15.073459] [drm] Initialized drm 1.1.0 20060810 [ 15.097073] psb_gfx: module is from the staging directory, the quality is unknown, you have been warned. 
[ 15.180630] gma500 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 15.180648] gma500 0000:00:02.0: setting latency timer to 64 [ 15.182117] Stolen memory information [ 15.182127] base in RAM: 0x7f800000 [ 15.182134] size: 7932K, calculated by (GTT RAM base) - (Stolen base), seems wrong [ 15.182143] the correct size should be: 8M(dvmt mode=3) [ 15.234889] Set up 1983 stolen pages starting at 0x7f800000, GTT offset 0K [ 15.235126] [drm] SGX core id = 0x01130000 [ 15.235135] [drm] SGX core rev major = 0x01, minor = 0x02 [ 15.235143] [drm] SGX core rev maintenance = 0x01, designer = 0x00 [ 15.268796] [Firmware Bug]: ACPI: No _BQC method, cannot determine initial brightness [ 15.269888] acpi device:04: registered as cooling_device2 [ 15.270568] acpi device:05: registered as cooling_device3 [ 15.270947] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5 [ 15.271238] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 15.271424] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010). [ 15.271434] [drm] No driver support for vblank timestamp query. [ 15.374694] type=1400 audit(1349363644.167:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=435 comm="apparmor_parser" [ 15.385518] type=1400 audit(1349363644.179:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=435 comm="apparmor_parser" [ 15.386369] type=1400 audit(1349363644.179:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=435 comm="apparmor_parser" [ 15.677514] r8169 0000:01:00.0: eth0: link down [ 15.694828] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 16.537490] gma500 0000:00:02.0: allocated 800x480 fb [ 16.558066] fbcon: psbfb (fb0) is primary device [ 16.747122] gma500 0000:00:02.0: BL bug: Reg 00000000 save 00000000 [ 16.775550] Console: switching to colour frame buffer device 100x30 [ 16.781804] fb0: psbfb frame buffer device [ 16.781812] drm: registered panic notifier [ 16.870168] [drm] Initialized gma500 1.0.0 2011-06-06 for 0000:00:02.0 on minor 0 [ 16.871166] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871186] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871207] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 16.871284] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 29.338953] r8169 0000:01:00.0: eth0: link up [ 29.339471] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 31.427223] init: failsafe main process (675) killed by TERM signal [ 31.522411] type=1400 audit(1349363660.316:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=889 comm="apparmor_parser" [ 31.523956] type=1400 audit(1349363660.316:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=889 comm="apparmor_parser" [ 31.524882] type=1400 audit(1349363660.320:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=889 comm="apparmor_parser" [ 31.525940] type=1400 audit(1349363660.320:8): apparmor="STATUS" operation="profile_load" name="/usr/sbin/tcpdump" pid=891 comm="apparmor_parser" [ 34.526445] postgres (1003): /proc/1003/oom_adj is deprecated, please use /proc/1003/oom_score_adj instead. [ 40.144048] eth0: no IPv6 routers present

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So, let's get down and dirty and start setting up the Test Rig.

    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect
    The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect. A very useful video explains everything you wanted to know about Windows Azure Connect.

    Azure Compute
    Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the three role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

    Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
    Extra Small          | Shared    | 768 MB   | $0.04
    Small                | 1         | 1.75 GB  | $0.12
    Medium               | 2         | 3.50 GB  | $0.24
    Large                | 4         | 7.00 GB  | $0.48
    Extra Large          | 8         | 14.00 GB | $0.96

    You might want to review the Windows Azure Pricing FAQ.

    Let's get started building the Test Rig…

    Configuration:
    Machine | Role                              | Comments
    VM – 1  | Domain Controller for Playpit.com | On Premise
    VM – 2  | TFS, Test Controller              | On Premise
    VM – 3  | Test Agent                        | Cloud

    In this blog post I will assume that you already have the domain, Team Foundation Server and Test Controller installed and set up. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent
    Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for the reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
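    For reference, the WorkerRole.cs that the Cloud template generates looks roughly like the sketch below. Treat this as an illustrative approximation rather than the exact file; the version generated by your SDK is authoritative, and as noted above it does not need to be changed for the Test Rig.

    using System.Net;
    using System.Threading;
    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    namespace WorkerRole1
    {
        // Entry point for the worker role instance; the Azure fabric calls
        // OnStart() once and then Run() for the lifetime of the instance.
        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                Trace.WriteLine("WorkerRole1 entry point called", "Information");

                // The template simply loops; the test agent software installed
                // later does the real work, so nothing needs to change here.
                while (true)
                {
                    Thread.Sleep(10000);
                    Trace.WriteLine("Working", "Information");
                }
            }

            public override bool OnStart()
            {
                // Default limit for concurrent outgoing connections.
                ServicePointManager.DefaultConnectionLimit = 12;
                return base.OnStart();
            }
        }
    }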
    We will only be making changes to the Windows Azure project. Open ServiceDefinition.csdef and ensure that the vmsize is Small (remember the cost chart above), then import the "Connect" module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="WorkerRole1" vmsize="Small">
        <Imports>
          <Import moduleName="Diagnostics" />
          <Import moduleName="Connect"/>
        </Imports>
      </WorkerRole>
    </ServiceDefinition>

    Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key 'Microsoft.WindowsAzure.Plugins.Connect.%%%%' have been added to the configuration file. This is because you decided to import the Connect module. See the config below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Let's go step by step and understand all the highlighted parameters and where you can find the values for them.

    osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change it to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0. In PowerShell 2.0 you can simply use "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you have to change the registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily 2 is if you want IIS 7.5.

    Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click on Connect, then click on Get Activation Token; this should give you the activation token. Copy the activation token to the clipboard and paste it into the configuration file.
    Note – later in the blog I'll be showing you how to install Connect on the on premise machine.

    EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain.

    DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from Server Manager and 'Active Directory Users and Computers'. Also, I created a new domain OU named 'CloudInstances' so that all the cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here, or hold fire, I'll be covering that when I come to certificates and encryption in the coming section (a small sketch of the encryption step also appears below).

    Now once you have filled all this information in, the configuration file should look something like this:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Next we will enable the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or let a beautiful wizard make them for us; I prefer the second option. So right click on the Windows Azure project and choose Publish. Once you get the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'. As soon as you do that you get another pop up asking for the details of the user that you will be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click that to get access to the certificate section. See screen shot below.
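    As mentioned above, the publish wizard will encrypt the RDP password against the certificate for you. If you want to produce an encrypted value yourself (for example for the DomainPassword setting), a minimal sketch is below. This is an addition to the original post, not part of it: it assumes the password-encryption certificate (the 'WAC – Test Rig' one created in the wizard) is installed in the CurrentUser\My store, the thumbprint is a placeholder, and UTF-8 encoding of the password is assumed.

    using System;
    using System.Security.Cryptography.Pkcs;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;

    class EncryptPasswordForAzureConfig
    {
        static void Main()
        {
            // Placeholder thumbprint - replace with the thumbprint of your own
            // password-encryption certificate as shown in the publish wizard.
            const string thumbprint = "AA23016CF0BDFC344400B5B82706B608B92E4217";

            // Look the certificate up in the current user's personal store.
            var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            X509Certificate2 cert = store.Certificates
                .Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
            store.Close();

            Console.Write("Password to encrypt: ");
            string password = Console.ReadLine();

            // Encrypt the password bytes for the certificate's recipient using
            // CMS/PKCS#7 and emit the base64 string for the cscfg value="".
            var content = new ContentInfo(Encoding.UTF8.GetBytes(password));
            var envelope = new EnvelopedCms(content);
            envelope.Encrypt(new CmsRecipient(cert));

            Console.WriteLine(Convert.ToBase64String(envelope.Encode()));
        }
    }

    If the encrypted value it prints does not work in your configuration, fall back to the value the wizard generates, which is what the walkthrough itself relies on.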
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
    We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on premise machines. Let's have a look at how we can do this.

    Log on to VM – 2 running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen shot below; if not, you will be asked to complete the subscription first. Click on Install Local Endpoints from the top left of the panel and you get a URL with a token id appended to it; remember the token I showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file. Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

    Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology.

    NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled; you can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed (a small scripted check for this is sketched below, after the publish step). Please refer here on MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

    Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to ready for connection.

    III. Importing the Certificate in to the Windows Azure Management Portal
    Before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN' and then 'Management Certificates' followed by Add Certificates as shown in the screen shot below. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.

    IV. Publish the Windows Azure Worker Role aka Test Agent
    Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
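    Referring back to the note above about the 'Windows Azure Connect Endpoint' service: if you prefer to check it from code rather than poking around services.msc, a rough sketch is below. This is not part of the original walkthrough; the service name is taken from the display name shown in services.msc and may differ on your machine, so treat it as an assumption, and note that the process needs administrative rights to start services.

    using System;
    using System.ServiceProcess;   // reference System.ServiceProcess.dll

    class EnsureConnectEndpointRunning
    {
        static void Main()
        {
            // Assumed service name - verify it against services.msc locally.
            using (var connect = new ServiceController("Windows Azure Connect Endpoint"))
            {
                if (connect.Status != ServiceControllerStatus.Running)
                {
                    // Try to start any stopped services this one depends on first.
                    foreach (ServiceController dependency in connect.ServicesDependedOn)
                    {
                        if (dependency.Status != ServiceControllerStatus.Running)
                        {
                            dependency.Start();
                            dependency.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                        }
                    }

                    connect.Start();
                    connect.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                }

                Console.WriteLine("'{0}' is {1}.", connect.DisplayName, connect.Status);
            }
        }
    }

    With the endpoint service running, the group membership and the publish you kicked off above behave as described, and you can watch the deployment next.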
    From the View menu select the Windows Azure Activity Log window; now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal you should also be able to see the progress of the creation of a new Worker Role.

    Once the deployment is complete you should be able to RDP in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account (go to the run prompt, type mstsc and enter the machine name in the pop up). In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is that, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys 'RemoteAccess.AccountUsername' and 'RemoteAccess.AccountEncryptedPassword' (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

    So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is successfully running. If yes, give yourself a pat on the back, you are 80% mission accomplished!

    Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the 'Connect to' section, click on Add to select the worker role you have just deployed. Also check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between the test agents themselves. Click Save. Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

    V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent to the Controller
    Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN, 'en_visual_studio_agents_2010_x86_x64_dvd_509679.iso', extract the iso and navigate to where you extracted it. In my case I extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

    Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block some things otherwise). In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
    But for the blog post we are using the most powerful user so that any policies or restrictions don't block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run in to either a permission issue or a firewall blocking communication.

    And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

    VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig
    I have various blog posts on Performance Testing with Visual Studio Ultimate; you can follow the links and videos below.
    Blog posts:
    - Part 1 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 2 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 3 – Performance Testing using Visual Studio 2010 Ultimate
    Videos:
    - Test Tools Configuration & Settings in Visual Studio
    - Why & How to Record Web Performance Tests in Visual Studio Ultimate
    - Goal Driven Load Testing using Visual Studio Ultimate
    Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

    Review and what's next?
    A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!

    Advantages:
    - Utilizing the power of Azure compute to run a heavy virtual user load.
    - Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
    - Most importantly, testing network latency (network latency and speed of connection are two different things – usually network latency is very hard to test). By placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

    Next steps:
    The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to make the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • web.xml not reloading in tomcat even after stop/start

    - by ajay
    This is in relation to:- http://stackoverflow.com/questions/2576514/basic-tomcat-servlet-error I changed my web.xml file, did ant compile , all, /etc/init.d/tomcat stop , start Even then my web.xml file in tomcat deployment is still unchanged. This is build.properties file:- app.name=hello catalina.home=/usr/local/tomcat manager.username=admin manager.password=admin This is my build.xml file. Is there something wrong with this:- <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- General purpose build script for web applications and web services, including enhanced support for deploying directly to a Tomcat 6 based server. This build script assumes that the source code of your web application is organized into the following subdirectories underneath the source code directory from which you execute the build script: docs Static documentation files to be copied to the "docs" subdirectory of your distribution. src Java source code (and associated resource files) to be compiled to the "WEB-INF/classes" subdirectory of your web applicaiton. web Static HTML, JSP, and other content (such as image files), including the WEB-INF subdirectory and its configuration file contents. $Id: build.xml.txt 562814 2007-08-05 03:52:04Z markt $ --> <!-- A "project" describes a set of targets that may be requested when Ant is executed. The "default" attribute defines the target which is executed if no specific target is requested, and the "basedir" attribute defines the current working directory from which Ant executes the requested task. This is normally set to the current working directory. --> <project name="My Project" default="compile" basedir="."> <!-- ===================== Property Definitions =========================== --> <!-- Each of the following properties are used in the build script. Values for these properties are set by the first place they are defined, from the following list: * Definitions on the "ant" command line (ant -Dfoo=bar compile). * Definitions from a "build.properties" file in the top level source directory of this application. * Definitions from a "build.properties" file in the developer's home directory. * Default definitions in this build.xml file. You will note below that property values can be composed based on the contents of previously defined properties. This is a powerful technique that helps you minimize the number of changes required when your development environment is modified. Note that property composition is allowed within "build.properties" files as well as in the "build.xml" script. 
--> <property file="build.properties"/> <property file="${user.home}/build.properties"/> <!-- ==================== File and Directory Names ======================== --> <!-- These properties generally define file and directory names (or paths) that affect where the build process stores its outputs. app.name Base name of this application, used to construct filenames and directories. Defaults to "myapp". app.path Context path to which this application should be deployed (defaults to "/" plus the value of the "app.name" property). app.version Version number of this iteration of the application. build.home The directory into which the "prepare" and "compile" targets will generate their output. Defaults to "build". catalina.home The directory in which you have installed a binary distribution of Tomcat 6. This will be used by the "deploy" target. dist.home The name of the base directory in which distribution files are created. Defaults to "dist". manager.password The login password of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) manager.url The URL of the "/manager" web application on the Tomcat installation to which we will deploy web applications and web services. manager.username The login username of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) --> <property name="app.name" value="myapp"/> <property name="app.path" value="/${app.name}"/> <property name="app.version" value="0.1-dev"/> <property name="build.home" value="${basedir}/build"/> <property name="catalina.home" value="../../../.."/> <!-- UPDATE THIS! --> <property name="dist.home" value="${basedir}/dist"/> <property name="docs.home" value="${basedir}/docs"/> <property name="manager.url" value="http://localhost:8080/manager"/> <property name="src.home" value="${basedir}/src"/> <property name="web.home" value="${basedir}/web"/> <!-- ==================== External Dependencies =========================== --> <!-- Use property values to define the locations of external JAR files on which your application will depend. In general, these values will be used for two purposes: * Inclusion on the classpath that is passed to the Javac compiler * Being copied into the "/WEB-INF/lib" directory during execution of the "deploy" target. Because we will automatically include all of the Java classes that Tomcat 6 exposes to web applications, we will not need to explicitly list any of those dependencies. You only need to worry about external dependencies for JAR files that you are going to include inside your "/WEB-INF/lib" directory. --> <!-- Dummy external dependency --> <!-- <property name="foo.jar" value="/path/to/foo.jar"/> --> <!-- ==================== Compilation Classpath =========================== --> <!-- Rather than relying on the CLASSPATH environment variable, Ant includes features that makes it easy to dynamically construct the classpath you need for each compilation. The example below constructs the compile classpath to include the servlet.jar file, as well as the other components that Tomcat makes available to web applications automatically, plus anything that you explicitly added. 
--> <path id="compile.classpath"> <!-- Include all JAR files that will be included in /WEB-INF/lib --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <!-- <pathelement location="${foo.jar}"/> --> <!-- Include all elements that Tomcat exposes to applications --> <fileset dir="${catalina.home}/bin"> <include name="*.jar"/> </fileset> <pathelement location="${catalina.home}/lib"/> <fileset dir="${catalina.home}/lib"> <include name="*.jar"/> </fileset> </path> <!-- ================== Custom Ant Task Definitions ======================= --> <!-- These properties define custom tasks for the Ant build tool that interact with the "/manager" web application installed with Tomcat 6. Before they can be successfully utilized, you must perform the following steps: - Copy the file "lib/catalina-ant.jar" from your Tomcat 6 installation into the "lib" directory of your Ant installation. - Create a "build.properties" file in your application's top-level source directory (or your user login home directory) that defines appropriate values for the "manager.password", "manager.url", and "manager.username" properties described above. For more information about the Manager web application, and the functionality of these tasks, see <http://localhost:8080/tomcat-docs/manager-howto.html>. --> <taskdef resource="org/apache/catalina/ant/catalina.tasks" classpathref="compile.classpath"/> <!-- ==================== Compilation Control Options ==================== --> <!-- These properties control option settings on the Javac compiler when it is invoked using the <javac> task. compile.debug Should compilation include the debug option? compile.deprecation Should compilation include the deprecation option? compile.optimize Should compilation include the optimize option? --> <property name="compile.debug" value="true"/> <property name="compile.deprecation" value="false"/> <property name="compile.optimize" value="true"/> <!-- ==================== All Target ====================================== --> <!-- The "all" target is a shortcut for running the "clean" target followed by the "compile" target, to force a complete recompile. --> <target name="all" depends="clean,compile" description="Clean build and dist directories, then compile"/> <!-- ==================== Clean Target ==================================== --> <!-- The "clean" target deletes any previous "build" and "dist" directory, so that you can be ensured the application can be built from scratch. --> <target name="clean" description="Delete old build and dist directories"> <delete dir="${build.home}"/> <delete dir="${dist.home}"/> </target> <!-- ==================== Compile Target ================================== --> <!-- The "compile" target transforms source files (from your "src" directory) into object files in the appropriate location in the build directory. This example assumes that you will be including your classes in an unpacked directory hierarchy under "/WEB-INF/classes". 
--> <target name="compile" depends="prepare" description="Compile Java sources"> <!-- Compile Java classes as necessary --> <mkdir dir="${build.home}/WEB-INF/classes"/> <javac srcdir="${src.home}" destdir="${build.home}/WEB-INF/classes" debug="${compile.debug}" deprecation="${compile.deprecation}" optimize="${compile.optimize}"> <classpath refid="compile.classpath"/> </javac> <!-- Copy application resources --> <copy todir="${build.home}/WEB-INF/classes"> <fileset dir="${src.home}" excludes="**/*.java"/> </copy> </target> <!-- ==================== Dist Target ===================================== --> <!-- The "dist" target creates a binary distribution of your application in a directory structure ready to be archived in a tar.gz or zip file. Note that this target depends on two others: * "compile" so that the entire web application (including external dependencies) will have been assembled * "javadoc" so that the application Javadocs will have been created --> <target name="dist" depends="compile,javadoc" description="Create binary distribution"> <!-- Copy documentation subdirectories --> <mkdir dir="${dist.home}/docs"/> <copy todir="${dist.home}/docs"> <fileset dir="${docs.home}"/> </copy> <!-- Create application JAR file --> <jar jarfile="${dist.home}/${app.name}-${app.version}.war" basedir="${build.home}"/> <!-- Copy additional files to ${dist.home} as necessary --> </target> <!-- ==================== Install Target ================================== --> <!-- The "install" target tells the specified Tomcat 6 installation to dynamically install this web application and make it available for execution. It does *not* cause the existence of this web application to be remembered across Tomcat restarts; if you restart the server, you will need to re-install all this web application. If you have already installed this application, and simply want Tomcat to recognize that you have updated Java classes (or the web.xml file), use the "reload" target instead. NOTE: This target will only succeed if it is run from the same server that Tomcat is running on. NOTE: This is the logical opposite of the "remove" target. --> <target name="install" depends="compile" description="Install application to servlet container"> <deploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}" localWar="file://${build.home}"/> </target> <!-- ==================== Javadoc Target ================================== --> <!-- The "javadoc" target creates Javadoc API documentation for the Java classes included in your application. Normally, this is only required when preparing a distribution release, but is available as a separate target in case the developer wants to create Javadocs independently. --> <target name="javadoc" depends="compile" description="Create Javadoc API documentation"> <mkdir dir="${dist.home}/docs/api"/> <javadoc sourcepath="${src.home}" destdir="${dist.home}/docs/api" packagenames="*"> <classpath refid="compile.classpath"/> </javadoc> </target> <!-- ====================== List Target =================================== --> <!-- The "list" target asks the specified Tomcat 6 installation to list the currently running web applications, either loaded at startup time or installed dynamically. It is useful to determine whether or not the application you are currently developing has been installed. 
--> <target name="list" description="List installed applications on servlet container"> <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/> </target> <!-- ==================== Prepare Target ================================== --> <!-- The "prepare" target is used to create the "build" destination directory, and copy the static contents of your web application to it. If you need to copy static files from external dependencies, you can customize the contents of this task. Normally, this task is executed indirectly when needed. --> <target name="prepare"> <!-- Create build directories as needed --> <mkdir dir="${build.home}"/> <mkdir dir="${build.home}/WEB-INF"/> <mkdir dir="${build.home}/WEB-INF/classes"/> <!-- Copy static content of this web application --> <copy todir="${build.home}"> <fileset dir="${web.home}"/> </copy> <!-- Copy external dependencies as required --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <mkdir dir="${build.home}/WEB-INF/lib"/> <!-- <copy todir="${build.home}/WEB-INF/lib" file="${foo.jar}"/> --> <!-- Copy static files from external dependencies as needed --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> </target> <!-- ==================== Reload Target =================================== --> <!-- The "reload" signals the specified application Tomcat 6 to shut itself down and reload. This can be useful when the web application context is not reloadable and you have updated classes or property files in the /WEB-INF/classes directory or when you have added or updated jar files in the /WEB-INF/lib directory. NOTE: The /WEB-INF/web.xml web application configuration file is not reread on a reload. If you have made changes to your web.xml file you must stop then start the web application. --> <target name="reload" depends="compile" description="Reload application on servlet container"> <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> <!-- ==================== Remove Target =================================== --> <!-- The "remove" target tells the specified Tomcat 6 installation to dynamically remove this web application from service. NOTE: This is the logical opposite of the "install" target. --> <target name="remove" description="Remove application on servlet container"> <undeploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> </project>

    Read the article

  • MS Dynamics CRM trapping .NET error before I can handle it

    - by clifgriffin
    This is a fun one. I have written a custom search page that provides faster, more user friendly searches than the default Contacts view and also allows searching of Leads and Contacts simultaneously. It uses GridViews bound to SqlDataSources that query filtered views. I'm sure someone will complain that I'm not using the web services for this, but this is just the design decision we made. These GridViews live in UpdatePanels to enable very slick AJAX updates upon search. It's all working great. Nearly ready to be deployed, except for one thing: Some long running searches are triggering an uncatchable SQL timeout exception. [SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable) at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback) at System.Web.UI.WebControls.DataBoundControl.PerformSelect() at System.Web.UI.WebControls.BaseDataBoundControl.DataBind() at System.Web.UI.WebControls.GridView.DataBind() at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound() at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls() at System.Web.UI.Control.EnsureChildControls() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, 
Boolean includeStagesAfterAsyncPoint) I found that CRM is doing a server.transfer to capture this error because my UpdatePanels started throwing JavaSript errors when this error would occur. I was only able to get the full error message by using the JavaScript debugger in IE. Having found this error, I thought the solution would be simple. I just needed to wrap my databind calls in try/catch blocks to capture any errors. Unfortunately it seems CRM's IIS configuration has the magic ability to capture this error before it ever gets back to my code. Using the debugger I never see it. It never gets to my catch blocks, but it's clearly happening in the SQL Data Source which is clearly (by the stack trace) being triggered by my GridView bind. Any ideas on this? It's driving me crazy. Code Behind (with some irrelevant functions omitted): protected void Page_Load(object sender, EventArgs e) { //Initialize some stuff this.bannerOracle = new OdbcConnection(ConfigurationManager.ConnectionStrings["OracleConnectionString"].ConnectionString); //Prospect default HideProspects(); HideProspectAddressColumn(); //Contacts default HideContactAddressColumn(); //Default error messages gvContacts.EmptyDataText = "Sad day. Your search returned no contacts."; gvProspects.EmptyDataText = "Sad day. Your search returned no prospects."; //New search try { SearchContact(null, -1); } catch { gvContacts.EmptyDataText = "Oops! An error occured. This may have been a timeout. Please try your search again."; gvContacts.DataSource = null; gvContacts.DataBind(); } } protected void txtSearchString_TextChanged(object sender, EventArgs e) { if(!String.IsNullOrEmpty(txtSearchString.Text)) { try { SearchContact(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue)); } catch { gvContacts.EmptyDataText = "Oops! An error occured. This may have been a timeout. Please try your search again."; gvContacts.DataSource = null; gvContacts.DataBind(); } if (chkProspects.Checked == true) { try { SearchProspect(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue)); } catch { gvProspects.EmptyDataText = "Oops! An error occured. This may have been a timeout. 
Please try your search again."; gvProspects.DataSource = null; gvProspects.DataBind(); } finally { ShowProspects(); } } else { HideProspects(); } } } protected void SearchContact(string search, int type) { SqlCRM_Contact.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString; gvContacts.DataSourceID = "SqlCRM_Contact"; string strQuery = ""; string baseQuery = @"SELECT filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname, 'none' AS line1, filteredcontact.emailaddress1, filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename FROM filteredcontact "; switch(type) { case LASTFIRST: strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LAST: strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case FIRST: strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LIBERTYID: strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case EMAIL: strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case TELEPHONE: strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case BIRTHDAY: strQuery = baseQuery + "WHERE filteredcontact.birthdateutc BETWEEN @dateStart AND @dateEnd AND filteredcontact.statecode = 0"; try { DateTime temp = DateTime.Parse(search); if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; } else { search = temp.ToString("yyyy-MM-dd"); } } catch { search = string.Empty; } SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000"); SqlCRM_Contact.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999"); break; case SSN: //Do something break; case ADDRESS: strQuery = @"SELECT contactid, new_libertyid, fullname, line1, emailaddress1, telephone1, birthdate, gendercodename FROM (SELECT FC.contactid, FC.new_libertyid, FC.fullname, FA.line1, FC.emailaddress1, FC.telephone1, FC.birthdateutc AS birthdate, FC.gendercodename, ROW_NUMBER() OVER(PARTITION BY FC.contactid ORDER BY FC.contactid DESC) AS rn FROM filteredcontact FC INNER JOIN FilteredCustomerAddress FA ON FC.contactid = FA.parentid WHERE FA.line1 LIKE @value AND FA.addressnumber <> 1 AND FC.statecode = 0 ) AS RESULTS WHERE rn = 1"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); ShowContactAddressColumn(); break; default: strQuery = @"SELECT TOP 500 filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname, 'none' AS line1, 
filteredcontact.emailaddress1, filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename FROM filteredcontact WHERE filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; break; } if (type != ADDRESS) { HideContactAddressColumn(); } gvContacts.PageIndex = 0; //try //{ // SqlCRM_Contact.DataBind(); //} //catch //{ // SqlCRM_Contact.DataBind(); //} gvContacts.DataBind(); } protected void SearchProspect(string search, int type) { SqlCRM_Prospect.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString; gvProspects.DataSourceID = "SqlCRM_Prospect"; string strQuery = ""; string baseQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1, filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead "; switch (type) { case LASTFIRST: strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LAST: strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case FIRST: strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LIBERTYID: strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case EMAIL: strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case TELEPHONE: strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case BIRTHDAY: strQuery = baseQuery + "WHERE filteredlead.lu_dateofbirth BETWEEN @dateStart AND @dateEnd AND filteredlead.statecode = 0"; try { DateTime temp = DateTime.Parse(search); if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; } else { search = temp.ToString("yyyy-MM-dd"); } } catch { search = string.Empty; } SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000"); SqlCRM_Prospect.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999"); break; case SSN: //Do nothing break; case ADDRESS: strQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, filteredlead.address1_line1, filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead WHERE address1_line1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); ShowProspectAddressColumn(); break; default: strQuery = @"SELECT TOP 500 filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1 
filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead WHERE filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; break; } if (type != ADDRESS) { HideProspectAddressColumn(); } gvProspects.PageIndex = 0; //try //{ // SqlCRM_Prospect.DataBind(); //} //catch (Exception ex) //{ // SqlCRM_Prospect.DataBind(); //} gvProspects.DataBind(); }

    Read the article

  • Bulk inserting: best way to go about it? + Help me fully understand what I've found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
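On the informal stopwatch timing and the batch-size question above: a minimal sketch, assuming the same TBL_TEST_TEST table and connection-string placeholder as the code above, that wraps the SqlBulkCopy run in a Stopwatch and uses a batch size well below the total row count. With SqlBulkCopyOptions.UseInternalTransaction each batch commits in its own transaction, which is one reason to keep BatchSize smaller than the full 500,000 rows.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class BulkCopyTiming
    {
        static void Main()
        {
            string connectionString = ""; // same placeholder as in the question; set to a real connection string

            // Build the same 500,000-row, single-column DataTable that GetDataTable() produces.
            var rows = new DataTable();
            rows.Columns.Add("NAME", typeof(string));
            for (int i = 0; i < 500000; i++)
                rows.Rows.Add("Name : " + i);

            var stopwatch = Stopwatch.StartNew();

            using (var bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.UseInternalTransaction))
            {
                bulkCopy.DestinationTableName = "TBL_TEST_TEST";
                bulkCopy.BatchSize = 5000;        // rows sent per round trip / per internal transaction
                bulkCopy.BulkCopyTimeout = 600;   // seconds; the default is 30
                bulkCopy.ColumnMappings.Add("NAME", "NAME"); // identity column is generated by the server
                bulkCopy.WriteToServer(rows);
            }

            stopwatch.Stop();
            Console.WriteLine("Inserted {0} rows in {1}", rows.Rows.Count, stopwatch.Elapsed);
        }
    }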

    Read the article

  • Hosting the Razor Engine for Templating in Non-Web Applications

    - by Rick Strahl
    Microsoft’s new Razor HTML Rendering Engine that is currently shipping with ASP.NET MVC previews can be used outside of ASP.NET. Razor is an alternative view engine that can be used instead of the ASP.NET Page engine that currently works with ASP.NET WebForms and MVC. It provides a simpler and more readable markup syntax and is much more light weight in terms of functionality than the full blown WebForms Page engine, focusing only on features that are more along the lines of a pure view engine (or classic ASP!) with focus on expression and code rendering rather than a complex control/object model. Like the Page engine though, the parser understands .NET code syntax which can be embedded into templates, and behind the scenes the engine compiles markup and script code into an executing piece of .NET code in an assembly. Although it ships as part of the ASP.NET MVC and WebMatrix the Razor Engine itself is not directly dependent on ASP.NET or IIS or HTTP in any way. And although there are some markup and rendering features that are optimized for HTML based output generation, Razor is essentially a free standing template engine. And what’s really nice is that unlike the ASP.NET Runtime, Razor is fairly easy to host inside of your own non-Web applications to provide templating functionality. Templating in non-Web Applications? Yes please! So why might you host a template engine in your non-Web application? Template rendering is useful in many places and I have a number of applications that make heavy use of it. One of my applications – West Wind Html Help Builder - exclusively uses template based rendering to merge user supplied help text content into customizable and executable HTML markup templates that provide HTML output for CHM style HTML Help. This is an older product and it’s not actually using .NET at the moment – and this is one reason I’m looking at Razor for script hosting at the moment. For a few .NET applications though I’ve actually used the ASP.NET Runtime hosting to provide templating and mail merge style functionality and while that works reasonably well it’s a very heavy handed approach. It’s very resource intensive and has potential issues with versioning in various different versions of .NET. The generic implementation I created in the article above requires a lot of fix up to mimic an HTTP request in a non-HTTP environment and there are a lot of little things that have to happen to ensure that the ASP.NET runtime works properly most of it having nothing to do with the templating aspect but just satisfying ASP.NET’s requirements. The Razor Engine on the other hand is fairly light weight and completely decoupled from the ASP.NET runtime and the HTTP processing. Rather it’s a pure template engine whose sole purpose is to render text templates. Hosting this engine in your own applications can be accomplished with a reasonable amount of code (actually just a few lines with the tools I’m about to describe) and without having to fake HTTP requests. It’s also much lighter on resource usage and you can easily attach custom properties to your base template implementation to easily pass context from the parent application into templates all of which was rather complicated with ASP.NET runtime hosting. Installing the Razor Template Engine You can get Razor as part of the MVC 3 (RC and later) or Web Matrix. Both are available as downloadable components from the Web Platform Installer Version 3.0 (!important – V2 doesn’t show these components). 
If you already have that version of the WPI installed just fire it up. You can get the latest version of the Web Platform Installer from here: http://www.microsoft.com/web/gallery/install.aspx Once the platform Installer 3.0 is installed install either MVC 3 or ASP.NET Web Pages. Once installed you’ll find a System.Web.Razor assembly in C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\System.Web.Razor.dll which you can add as a reference to your project. Creating a Wrapper The basic Razor Hosting API is pretty simple and you can host Razor with a (large-ish) handful of lines of code. I’ll show the basics of it later in this article. However, if you want to customize the rendering and handle assembly and namespace includes for the markup as well as deal with text and file inputs as well as forcing Razor to run in a separate AppDomain so you can unload the code-generated assemblies and deal with assembly caching for re-used templates little more work is required to create something that is more easily reusable. For this reason I created a Razor Hosting wrapper project that combines a bunch of this functionality into an easy to use hosting class, a hosting factory that can load the engine in a separate AppDomain and a couple of hosting containers that provided folder based and string based caching for templates for an easily embeddable and reusable engine with easy to use syntax. If you just want the code and play with the samples and source go grab the latest code from the Subversion Repository at: http://www.west-wind.com:8080/svn/articles/trunk/RazorHosting/ or a snapshot from: http://www.west-wind.com/files/tools/RazorHosting.zip Getting Started Before I get into how hosting with Razor works, let’s take a look at how you can get up and running quickly with the wrapper classes provided. It only takes a few lines of code. The easiest way to use these Razor Hosting Wrappers is to use one of the two HostContainers provided. One is for hosting Razor scripts in a directory and rendering them as relative paths from these script files on disk. The other HostContainer serves razor scripts from string templates… Let’s start with a very simple template that displays some simple expressions, some code blocks and demonstrates rendering some data from contextual data that you pass to the template in the form of a ‘context’. Here’s a simple Razor template: @using System.Reflection Hello @Context.FirstName! Your entry was entered on: @Context.Entered @{ // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); } AppDomain Id: @AppDomain.CurrentDomain.FriendlyName Assembly: @Assembly.GetExecutingAssembly().FullName Code based output: @{ // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } Response.Write(output); } Pretty easy to see what’s going on here. The only unusual thing in this code is the Context object which is an arbitrary object I’m passing from the host to the template by way of the template base class. I’m also displaying the current AppDomain and the executing Assembly name so you can see how compiling and running a template actually loads up new assemblies. Also note that as part of my context I’m passing a reference to the current Windows Form down to the template and changing the title from within the script. 
It’s a silly example, but it demonstrates two-way communication between host and template and back which can be very powerful. The easiest way to quickly render this template is to use the RazorEngine<TTemplateBase> class. The generic parameter specifies a template base class type that is used by Razor internally to generate the class it generates from a template. The default implementation provided in my RazorHosting wrapper is RazorTemplateBase. Here’s a simple one that renders from a string and outputs a string: var engine = new RazorEngine<RazorTemplateBase>(); // we can pass any object as context - here create a custom context var context = new CustomContext() { WinForm = this, FirstName = "Rick", Entered = DateTime.Now.AddDays(-10) }; string output = engine.RenderTemplate(this.txtSource.Text new string[] { "System.Windows.Forms.dll" }, context); if (output == null) this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage; else this.txtResult.Text = output; Simple enough. This code renders a template from a string input and returns a result back as a string. It  creates a custom context and passes that to the template which can then access the Context’s properties. Note that anything passed as ‘context’ must be serializable (or MarshalByRefObject) – otherwise you get an exception when passing the reference over AppDomain boundaries (discussed later). Passing a context is optional, but is a key feature in being able to share data between the host application and the template. Note that we use the Context object to access FirstName, Entered and even the host Windows Form object which is used in the template to change the Window caption from within the script! In the code above all the work happens in the RenderTemplate method which provide a variety of overloads to read and write to and from strings, files and TextReaders/Writers. Here’s another example that renders from a file input using a TextReader: using (reader = new StreamReader("templates\\simple.csHtml", true)) { result = host.RenderTemplate(reader, new string[] { "System.Windows.Forms.dll" }, this.CustomContext); } RenderTemplate() is fairly high level and it handles loading of the runtime, compiling into an assembly and rendering of the template. If you want more control you can use the lower level methods to control each step of the way which is important for the HostContainers I’ll discuss later. Basically for those scenarios you want to separate out loading of the engine, compiling into an assembly and then rendering the template from the assembly. Why? So we can keep assemblies cached. In the code above a new assembly is created for each template rendered which is inefficient and uses up resources. Depending on the size of your templates and how often you fire them you can chew through memory very quickly. 
This slighter lower level approach is only a couple of extra steps: // we can pass any object as context - here create a custom context var context = new CustomContext() { WinForm = this, FirstName = "Rick", Entered = DateTime.Now.AddDays(-10) }; var engine = new RazorEngine<RazorTemplateBase>(); string assId = null; using (StringReader reader = new StringReader(this.txtSource.Text)) { assId = engine.ParseAndCompileTemplate(new string[] { "System.Windows.Forms.dll" }, reader); } string output = engine.RenderTemplateFromAssembly(assId, context); if (output == null) this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage; else this.txtResult.Text = output; The difference here is that you can capture the assembly – or rather an Id to it – and potentially hold on to it to render again later assuming the template hasn’t changed. The HostContainers take advantage of this feature to cache the assemblies based on certain criteria like a filename and file time step or a string hash that if not change indicate that an assembly can be reused. Note that ParseAndCompileTemplate returns an assembly Id rather than the assembly itself. This is done so that that the assembly always stays in the host’s AppDomain and is not passed across AppDomain boundaries which would cause load failures. We’ll talk more about this in a minute but for now just realize that assemblies references are stored in a list and are accessible by this ID to allow locating and re-executing of the assembly based on that id. Reuse of the assembly avoids recompilation overhead and creation of yet another assembly that loads into the current AppDomain. You can play around with several different versions of the above code in the main sample form:   Using Hosting Containers for more Control and Caching The above examples simply render templates into assemblies each and every time they are executed. While this works and is even reasonably fast, it’s not terribly efficient. If you render templates more than once it would be nice if you could cache the generated assemblies for example to avoid re-compiling and creating of a new assembly each time. Additionally it would be nice to load template assemblies into a separate AppDomain optionally to be able to be able to unload assembli es and also to protect your host application from scripting attacks with malicious template code. Hosting containers provide also provide a wrapper around the RazorEngine<T> instance, a factory (which allows creation in separate AppDomains) and an easy way to start and stop the container ‘runtime’. The Razor Hosting samples provide two hosting containers: RazorFolderHostContainer and StringHostContainer. The folder host provides a simple runtime environment for a folder structure similar in the way that the ASP.NET runtime handles a virtual directory as it’s ‘application' root. Templates are loaded from disk in relative paths and the resulting assemblies are cached unless the template on disk is changed. The string host also caches templates based on string hashes – if the same string is passed a second time a cached version of the assembly is used. Here’s how HostContainers work. I’ll use the FolderHostContainer because it’s likely the most common way you’d use templates – from disk based templates that can be easily edited and maintained on disk. 
The first step is to create an instance of it and keep it around somewhere (in the example it’s attached as a property to the Form): RazorFolderHostContainer Host = new RazorFolderHostContainer(); public RazorFolderHostForm() { InitializeComponent(); // The base path for templates - templates are rendered with relative paths // based on this path. Host.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); // Add any assemblies you want reference in your templates Host.ReferencedAssemblies.Add("System.Windows.Forms.dll"); // Start up the host container Host.Start(); } Next anytime you want to render a template you can use simple code like this: private void RenderTemplate(string fileName) { // Pass the template path via the Context var relativePath = Utilities.GetRelativePath(fileName, Host.TemplatePath); if (!Host.RenderTemplate(relativePath, this.Context, Host.RenderingOutputFile)) { MessageBox.Show("Error: " + Host.ErrorMessage); return; } this.webBrowser1.Navigate("file://" + Host.RenderingOutputFile); } You can also render the output to a string instead of to a file: string result = Host.RenderTemplateToString(relativePath,context); Finally if you want to release the engine and shut down the hosting AppDomain you can simply do: Host.Stop(); Stopping the AppDomain and restarting it (ie. calling Stop(); followed by Start()) is also a nice way to release all resources in the AppDomain. The FolderBased domain also supports partial Rendering based on root path based relative paths with the same caching characteristics as the main templates. From within a template you can call out to a partial like this: @RenderPartial(@"partials\PartialRendering.cshtml", Context) where partials\PartialRendering.cshtml is a relative to the template root folder. The folder host example lets you load up templates from disk and display the result in a Web Browser control which demonstrates using Razor HTML output from templates that contain HTML syntax which happens to me my target scenario for Html Help Builder.   The Razor Engine Wrapper Project The project I created to wrap Razor hosting has a fair bit of code and a number of classes associated with it. Most of the components are internally used and as you can see using the final RazorEngine<T> and HostContainer classes is pretty easy. The classes are extensible and I suspect developers will want to build more customized host containers for their applications. Host containers are the key to wrapping up all functionality – Engine, BaseTemplate, AppDomain Hosting, Caching etc in a logical piece that is ready to be plugged into an application. When looking at the code there are a couple of core features provided: Core Razor Engine Hosting This is the core Razor hosting which provides the basics of loading a template, compiling it into an assembly and executing it. This is fairly straightforward, but without a host container that can cache assemblies based on some criteria templates are recompiled and re-created each time which is inefficient (although pretty fast). The base engine wrapper implementation also supports hosting the Razor runtime in a separate AppDomain for security and the ability to unload it on demand. Host Containers The engine hosting itself doesn’t provide any sort of ‘runtime’ service like picking up files from disk, caching assemblies and so forth. So my implementation provides two HostContainers: RazorFolderHostContainer and RazorStringHostContainer. 
The FolderHost works off a base directory and loads templates based on relative paths (sort of like the ASP.NET runtime does off a virtual). The HostContainers also deal with caching of template assemblies – for the folder host the file date is tracked and checked for updates and unless the template is changed a cached assembly is reused. The StringHostContainer similiarily checks string hashes to figure out whether a particular string template was previously compiled and executed. The HostContainers also act as a simple startup environment and a single reference to easily store and reuse in an application. TemplateBase Classes The template base classes are the base classes that from which the Razor engine generates .NET code. A template is parsed into a class with an Execute() method and the class is based on this template type you can specify. RazorEngine<TBaseTemplate> can receive this type and the HostContainers default to specific templates in their base implementations. Template classes are customizable to allow you to create templates that provide application specific features and interaction from the template to your host application. How does the RazorEngine wrapper work? You can browse the source code in the links above or in the repository or download the source, but I’ll highlight some key features here. Here’s part of the RazorEngine implementation that can be used to host the runtime and that demonstrates the key code required to host the Razor runtime. The RazorEngine class is implemented as a generic class to reflect the Template base class type: public class RazorEngine<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase The generic type is used to internally provide easier access to the template type and assignments on it as part of the template processing. The class also inherits MarshalByRefObject to allow execution over AppDomain boundaries – something that all the classes discussed here need to do since there is much interaction between the host and the template. The first two key methods deal with creating a template assembly: /// <summary> /// Creates an instance of the RazorHost with various options applied. /// Applies basic namespace imports and the name of the class to generate /// </summary> /// <param name="generatedNamespace"></param> /// <param name="generatedClass"></param> /// <returns></returns> protected RazorTemplateEngine CreateHost(string generatedNamespace, string generatedClass) { Type baseClassType = typeof(TBaseTemplateType); RazorEngineHost host = new RazorEngineHost(new CSharpRazorCodeLanguage()); host.DefaultBaseClass = baseClassType.FullName; host.DefaultClassName = generatedClass; host.DefaultNamespace = generatedNamespace; host.NamespaceImports.Add("System"); host.NamespaceImports.Add("System.Text"); host.NamespaceImports.Add("System.Collections.Generic"); host.NamespaceImports.Add("System.Linq"); host.NamespaceImports.Add("System.IO"); return new RazorTemplateEngine(host); } /// <summary> /// Parses and compiles a markup template into an assembly and returns /// an assembly name. The name is an ID that can be passed to /// ExecuteTemplateByAssembly which picks up a cached instance of the /// loaded assembly. 
/// /// </summary> /// <param name="namespaceOfGeneratedClass">The namespace of the class to generate from the template</param> /// <param name="generatedClassName">The name of the class to generate from the template</param> /// <param name="ReferencedAssemblies">Any referenced assemblies by dll name only. Assemblies must be in execution path of host or in GAC.</param> /// <param name="templateSourceReader">Textreader that loads the template</param> /// <remarks> /// The actual assembly isn't returned here to allow for cross-AppDomain /// operation. If the assembly was returned it would fail for cross-AppDomain /// calls. /// </remarks> /// <returns>An assembly Id. The Assembly is cached in memory and can be used with RenderFromAssembly.</returns> public string ParseAndCompileTemplate( string namespaceOfGeneratedClass, string generatedClassName, string[] ReferencedAssemblies, TextReader templateSourceReader) { RazorTemplateEngine engine = CreateHost(namespaceOfGeneratedClass, generatedClassName); // Generate the template class as CodeDom GeneratorResults razorResults = engine.GenerateCode(templateSourceReader); // Create code from the codeDom and compile CSharpCodeProvider codeProvider = new CSharpCodeProvider(); CodeGeneratorOptions options = new CodeGeneratorOptions(); // Capture Code Generated as a string for error info // and debugging LastGeneratedCode = null; using (StringWriter writer = new StringWriter()) { codeProvider.GenerateCodeFromCompileUnit(razorResults.GeneratedCode, writer, options); LastGeneratedCode = writer.ToString(); } CompilerParameters compilerParameters = new CompilerParameters(ReferencedAssemblies); // Standard Assembly References compilerParameters.ReferencedAssemblies.Add("System.dll"); compilerParameters.ReferencedAssemblies.Add("System.Core.dll"); compilerParameters.ReferencedAssemblies.Add("Microsoft.CSharp.dll"); // dynamic support! // Also add the current assembly so RazorTemplateBase is available compilerParameters.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().CodeBase.Substring(8)); compilerParameters.GenerateInMemory = Configuration.CompileToMemory; if (!Configuration.CompileToMemory) compilerParameters.OutputAssembly = Path.Combine(Configuration.TempAssemblyPath, "_" + Guid.NewGuid().ToString("n") + ".dll"); CompilerResults compilerResults = codeProvider.CompileAssemblyFromDom(compilerParameters, razorResults.GeneratedCode); if (compilerResults.Errors.Count > 0) { var compileErrors = new StringBuilder(); foreach (System.CodeDom.Compiler.CompilerError compileError in compilerResults.Errors) compileErrors.Append(String.Format(Resources.LineX0TColX1TErrorX2RN, compileError.Line, compileError.Column, compileError.ErrorText)); this.SetError(compileErrors.ToString() + "\r\n" + LastGeneratedCode); return null; } AssemblyCache.Add(compilerResults.CompiledAssembly.FullName, compilerResults.CompiledAssembly); return compilerResults.CompiledAssembly.FullName; } Think of the internal CreateHost() method as setting up the assembly generated from each template. Each template compiles into a separate assembly. It sets up namespaces, and assembly references, the base class used and the name and namespace for the generated class. ParseAndCompileTemplate() then calls the CreateHost() method to receive the template engine generator which effectively generates a CodeDom from the template – the template is turned into .NET code. 
The code generated from our earlier example looks something like this: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace RazorTest { using System; using System.Text; using System.Collections.Generic; using System.Linq; using System.IO; using System.Reflection; public class RazorTemplate : RazorHosting.RazorTemplateBase { #line hidden public RazorTemplate() { } public override void Execute() { WriteLiteral("Hello "); Write(Context.FirstName); WriteLiteral("! Your entry was entered on: "); Write(Context.Entered); WriteLiteral("\r\n\r\n"); // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); WriteLiteral("\r\nAppDomain Id:\r\n "); Write(AppDomain.CurrentDomain.FriendlyName); WriteLiteral("\r\n \r\nAssembly:\r\n "); Write(Assembly.GetExecutingAssembly().FullName); WriteLiteral("\r\n\r\nCode based output: \r\n"); // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } } } } Basically the template’s body is turned into code in an Execute method that is called. Internally the template’s Write method is fired to actually generate the output. Note that the class inherits from RazorTemplateBase which is the generic parameter I used to specify the base class when creating an instance in my RazorEngine host: var engine = new RazorEngine<RazorTemplateBase>(); This template class must be provided and it must implement an Execute() and Write() method. Beyond that you can create any class you chose and attach your own properties. My RazorTemplateBase class implementation is very simple: public class RazorTemplateBase : MarshalByRefObject, IDisposable { /// <summary> /// You can pass in a generic context object /// to use in your template code /// </summary> public dynamic Context { get; set; } /// <summary> /// Class that generates output. Currently ultra simple /// with only Response.Write() implementation. /// </summary> public RazorResponse Response { get; set; } public object HostContainer {get; set; } public object Engine { get; set; } public RazorTemplateBase() { Response = new RazorResponse(); } public virtual void Write(object value) { Response.Write(value); } public virtual void WriteLiteral(object value) { Response.Write(value); } /// <summary> /// Razor Parser implements this method /// </summary> public virtual void Execute() {} public virtual void Dispose() { if (Response != null) { Response.Dispose(); Response = null; } } } Razor fills in the Execute method when it generates its subclass and uses the Write() method to output content. As you can see I use a RazorResponse() class here to generate output. This isn’t necessary really, as you could use a StringBuilder or StringWriter() directly, but I prefer using Response object so I can extend the Response behavior as needed. 
The RazorResponse class is also very simple and merely acts as a wrapper around a TextWriter: public class RazorResponse : IDisposable { /// <summary> /// Internal text writer - default to StringWriter() /// </summary> public TextWriter Writer = new StringWriter(); public virtual void Write(object value) { Writer.Write(value); } public virtual void WriteLine(object value) { Write(value); Write("\r\n"); } public virtual void WriteFormat(string format, params object[] args) { Write(string.Format(format, args)); } public override string ToString() { return Writer.ToString(); } public virtual void Dispose() { Writer.Close(); } public virtual void SetTextWriter(TextWriter writer) { // Close original writer if (Writer != null) Writer.Close(); Writer = writer; } } The Rendering Methods of RazorEngine At this point I’ve talked about the assembly generation logic and the template implementation itself. What’s left is that once you’ve generated the assembly is to execute it. The code to do this is handled in the various RenderXXX methods of the RazorEngine class. Let’s look at the lowest level one of these which is RenderTemplateFromAssembly() and a couple of internal support methods that handle instantiating and invoking of the generated template method: public string RenderTemplateFromAssembly( string assemblyId, string generatedNamespace, string generatedClass, object context, TextWriter outputWriter) { this.SetError(); Assembly generatedAssembly = AssemblyCache[assemblyId]; if (generatedAssembly == null) { this.SetError(Resources.PreviouslyCompiledAssemblyNotFound); return null; } string className = generatedNamespace + "." + generatedClass; Type type; try { type = generatedAssembly.GetType(className); } catch (Exception ex) { this.SetError(Resources.UnableToCreateType + className + ": " + ex.Message); return null; } // Start with empty non-error response (if we use a writer) string result = string.Empty; using(TBaseTemplateType instance = InstantiateTemplateClass(type)) { if (instance == null) return null; if (outputWriter != null) instance.Response.SetTextWriter(outputWriter); if (!InvokeTemplateInstance(instance, context)) return null; // Capture string output if implemented and return // otherwise null is returned if (outputWriter == null) result = instance.Response.ToString(); } return result; } protected virtual TBaseTemplateType InstantiateTemplateClass(Type type) { TBaseTemplateType instance = Activator.CreateInstance(type) as TBaseTemplateType; if (instance == null) { SetError(Resources.CouldnTActivateTypeInstance + type.FullName); return null; } instance.Engine = this; // If a HostContainer was set pass that to the template too instance.HostContainer = this.HostContainer; return instance; } /// <summary> /// Internally executes an instance of the template, /// captures errors on execution and returns true or false /// </summary> /// <param name="instance">An instance of the generated template</param> /// <returns>true or false - check ErrorMessage for errors</returns> protected virtual bool InvokeTemplateInstance(TBaseTemplateType instance, object context) { try { instance.Context = context; instance.Execute(); } catch (Exception ex) { this.SetError(Resources.TemplateExecutionError + ex.Message); return false; } finally { // Must make sure Response is closed instance.Response.Dispose(); } return true; } The RenderTemplateFromAssembly method basically requires the namespace and class to instantate and creates an instance of the class using InstantiateTemplateClass(). 
It then invokes the method with InvokeTemplateInstance(). These two methods are broken out because they are re-used by various other rendering methods and also to allow subclassing and providing additional configuration tasks to set properties and pass values to templates at execution time. In the default mode instantiation sets the Engine and HostContainer (discussed later) so the template can call back into the template engine, and the context is set when the template method is invoked. The various RenderXXX methods use similar code although they create the assemblies first. If you’re after potentially cashing assemblies the method is the one to call and that’s exactly what the two HostContainer classes do. More on that in a minute, but before we get into HostContainers let’s talk about AppDomain hosting and the like. Running Templates in their own AppDomain With the RazorEngine class above, when a template is parsed into an assembly and executed the assembly is created (in memory or on disk – you can configure that) and cached in the current AppDomain. In .NET once an assembly has been loaded it can never be unloaded so if you’re loading lots of templates and at some time you want to release them there’s no way to do so. If however you load the assemblies in a separate AppDomain that new AppDomain can be unloaded and the assemblies loaded in it with it. In order to host the templates in a separate AppDomain the easiest thing to do is to run the entire RazorEngine in a separate AppDomain. Then all interaction occurs in the other AppDomain and no further changes have to be made. To facilitate this there is a RazorEngineFactory which has methods that can instantiate the RazorHost in a separate AppDomain as well as in the local AppDomain. The host creates the remote instance and then hangs on to it to keep it alive as well as providing methods to shut down the AppDomain and reload the engine. Sounds complicated but cross-AppDomain invocation is actually fairly easy to implement. Here’s some of the relevant code from the RazorEngineFactory class. Like the RazorEngine this class is generic and requires a template base type in the generic class name: public class RazorEngineFactory<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase Here are the key methods of interest: /// <summary> /// Creates an instance of the RazorHost in a new AppDomain. This /// version creates a static singleton that that is cached and you /// can call UnloadRazorHostInAppDomain to unload it. /// </summary> /// <returns></returns> public static RazorEngine<TBaseTemplateType> CreateRazorHostInAppDomain() { if (Current == null) Current = new RazorEngineFactory<TBaseTemplateType>(); return Current.GetRazorHostInAppDomain(); } public static void UnloadRazorHostInAppDomain() { if (Current != null) Current.UnloadHost(); Current = null; } /// <summary> /// Instance method that creates a RazorHost in a new AppDomain. /// This method requires that you keep the Factory around in /// order to keep the AppDomain alive and be able to unload it. /// </summary> /// <returns></returns> public RazorEngine<TBaseTemplateType> GetRazorHostInAppDomain() { LocalAppDomain = CreateAppDomain(null); if (LocalAppDomain == null) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!! 
RazorEngine<TBaseTemplateType> host = null; try { Assembly ass = Assembly.GetExecutingAssembly(); string AssemblyPath = ass.Location; host = (RazorEngine<TBaseTemplateType>) LocalAppDomain.CreateInstanceFrom(AssemblyPath, typeof(RazorEngine<TBaseTemplateType>).FullName).Unwrap(); } catch (Exception ex) { ErrorMessage = ex.Message; return null; } return host; } /// <summary> /// Internally creates a new AppDomain in which Razor templates can /// be run. /// </summary> /// <param name="appDomainName"></param> /// <returns></returns> private AppDomain CreateAppDomain(string appDomainName) { if (appDomainName == null) appDomainName = "RazorHost_" + Guid.NewGuid().ToString("n"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; AppDomain localDomain = AppDomain.CreateDomain(appDomainName, null, setup); return localDomain; } /// <summary> /// Allow unloading of the created AppDomain to release resources /// All internal resources in the AppDomain are released including /// in memory compiled Razor assemblies. /// </summary> public void UnloadHost() { if (this.LocalAppDomain != null) { AppDomain.Unload(this.LocalAppDomain); this.LocalAppDomain = null; } } The static CreateRazorHostInAppDomain() is the key method that startup code usually calls. It uses a Current singleton instance to an instance of itself that is created cross AppDomain and is kept alive because it’s static. GetRazorHostInAppDomain actually creates a cross-AppDomain instance which first creates a new AppDomain and then loads the RazorEngine into it. The remote Proxy instance is returned as a result to the method and can be used the same as a local instance. The code to run with a remote AppDomain is simple: private RazorEngine<RazorTemplateBase> CreateHost() { if (this.Host != null) return this.Host; // Use Static Methods - no error message if host doesn't load this.Host = RazorEngineFactory<RazorTemplateBase>.CreateRazorHostInAppDomain(); if (this.Host == null) { MessageBox.Show("Unable to load Razor Template Host", "Razor Hosting", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); } return this.Host; } This code relies on a local reference of the Host which is kept around for the duration of the app (in this case a form reference). To use this you’d simply do: this.Host = CreateHost(); if (host == null) return; string result = host.RenderTemplate( this.txtSource.Text, new string[] { "System.Windows.Forms.dll", "Westwind.Utilities.dll" }, this.CustomContext); if (result == null) { MessageBox.Show(host.ErrorMessage, "Template Execution Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); return; } this.txtResult.Text = result; Now all templates run in a remote AppDomain and can be unloaded with simple code like this: RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Host = null; One Step further – Providing a caching ‘Runtime’ Once we can load templates in a remote AppDomain we can add some additional functionality like assembly caching based on application specific features. One of my typical scenarios is to render templates out of a scripts folder. So all templates live in a folder and they change infrequently. So a Folder based host that can compile these templates once and then only recompile them if something changes would be ideal. Enter host containers which are basically wrappers around the RazorEngine<t> and RazorEngineFactory<t>. 
They provide additional logic for things like file caching based on changes on disk or string hashes for string based template inputs. The folder host also provides for partial rendering logic through a custom template base implementation. There’s a base implementation in RazorBaseHostContainer, which provides the basics for hosting a RazorEngine, which includes the ability to start and stop the engine, cache assemblies and add references: public abstract class RazorBaseHostContainer<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase, new() { public RazorBaseHostContainer() { UseAppDomain = true; GeneratedNamespace = "__RazorHost"; } /// <summary> /// Determines whether the Container hosts Razor /// in a separate AppDomain. Seperate AppDomain /// hosting allows unloading and releasing of /// resources. /// </summary> public bool UseAppDomain { get; set; } /// <summary> /// Base folder location where the AppDomain /// is hosted. By default uses the same folder /// as the host application. /// /// Determines where binary dependencies are /// found for assembly references. /// </summary> public string BaseBinaryFolder { get; set; } /// <summary> /// List of referenced assemblies as string values. /// Must be in GAC or in the current folder of the host app/ /// base BinaryFolder /// </summary> public List<string> ReferencedAssemblies = new List<string>(); /// <summary> /// Name of the generated namespace for template classes /// </summary> public string GeneratedNamespace {get; set; } /// <summary> /// Any error messages /// </summary> public string ErrorMessage { get; set; } /// <summary> /// Cached instance of the Host. Required to keep the /// reference to the host alive for multiple uses. /// </summary> public RazorEngine<TBaseTemplateType> Engine; /// <summary> /// Cached instance of the Host Factory - so we can unload /// the host and its associated AppDomain. /// </summary> protected RazorEngineFactory<TBaseTemplateType> EngineFactory; /// <summary> /// Keep track of each compiled assembly /// and when it was compiled. /// /// Use a hash of the string to identify string /// changes. /// </summary> protected Dictionary<int, CompiledAssemblyItem> LoadedAssemblies = new Dictionary<int, CompiledAssemblyItem>(); /// <summary> /// Call to start the Host running. Follow by a calls to RenderTemplate to /// render individual templates. Call Stop when done. /// </summary> /// <returns>true or false - check ErrorMessage on false </returns> public virtual bool Start() { if (Engine == null) { if (UseAppDomain) Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHostInAppDomain(); else Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHost(); Engine.Configuration.CompileToMemory = true; Engine.HostContainer = this; if (Engine == null) { this.ErrorMessage = EngineFactory.ErrorMessage; return false; } } return true; } /// <summary> /// Stops the Host and releases the host AppDomain and cached /// assemblies. /// </summary> /// <returns>true or false</returns> public bool Stop() { this.LoadedAssemblies.Clear(); RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Engine = null; return true; } … } This base class provides most of the mechanics to host the runtime, but no application specific implementation for rendering. There are rendering functions but they just call the engine directly and provide no caching – there’s no context to decide how to cache and reuse templates. 
The key methods are Start and Stop and their main purpose is to start a new AppDomain (optionally) and shut it down when requested. The RazorFolderHostContainer – Folder Based Runtime Hosting Let’s look at the more application specific RazorFolderHostContainer implementation which is defined like this: public class RazorFolderHostContainer : RazorBaseHostContainer<RazorTemplateFolderHost> Note that a customized RazorTemplateFolderHost class template is used for this implementation that supports partial rendering in form of a RenderPartial() method that’s available to templates. The folder host’s features are: Render templates based on a Template Base Path (a ‘virtual’ if you will) Cache compiled assemblies based on the relative path and file time stamp File changes on templates cause templates to be recompiled into new assemblies Support for partial rendering using base folder relative pathing As shown in the startup examples earlier host containers require some startup code with a HostContainer tied to a persistent property (like a Form property): // The base path for templates - templates are rendered with relative paths // based on this path. HostContainer.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); // Default output rendering disk location HostContainer.RenderingOutputFile = Path.Combine(HostContainer.TemplatePath, "__Preview.htm"); // Add any assemblies you want reference in your templates HostContainer.ReferencedAssemblies.Add("System.Windows.Forms.dll"); // Start up the host container HostContainer.Start(); Once that’s done, you can render templates with the host container: // Pass the template path for full filename seleted with OpenFile Dialog // relativepath is: subdir\file.cshtml or file.cshtml or ..\file.cshtml var relativePath = Utilities.GetRelativePath(fileName, HostContainer.TemplatePath); if (!HostContainer.RenderTemplate(relativePath, Context, HostContainer.RenderingOutputFile)) { MessageBox.Show("Error: " + HostContainer.ErrorMessage); return; } webBrowser1.Navigate("file://" + HostContainer.RenderingOutputFile); The most critical task of the RazorFolderHostContainer implementation is to retrieve a template from disk, compile and cache it and then deal with deciding whether subsequent requests need to re-compile the template or simply use a cached version. Internally the GetAssemblyFromFileAndCache() handles this task: /// <summary> /// Internally checks if a cached assembly exists and if it does uses it /// else creates and compiles one. Returns an assembly Id to be /// used with the LoadedAssembly list. 
/// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> protected virtual CompiledAssemblyItem GetAssemblyFromFileAndCache(string relativePath) { string fileName = Path.Combine(TemplatePath, relativePath).ToLower(); int fileNameHash = fileName.GetHashCode(); if (!File.Exists(fileName)) { this.SetError(Resources.TemplateFileDoesnTExist + fileName); return null; } CompiledAssemblyItem item = null; this.LoadedAssemblies.TryGetValue(fileNameHash, out item); string assemblyId = null; // Check for cached instance if (item != null) { var fileTime = File.GetLastWriteTimeUtc(fileName); if (fileTime <= item.CompileTimeUtc) assemblyId = item.AssemblyId; } else item = new CompiledAssemblyItem(); // No cached instance - create assembly and cache if (assemblyId == null) { string safeClassName = GetSafeClassName(fileName); StreamReader reader = null; try { reader = new StreamReader(fileName, true); } catch (Exception ex) { this.SetError(Resources.ErrorReadingTemplateFile + fileName); return null; } assemblyId = Engine.ParseAndCompileTemplate(this.ReferencedAssemblies.ToArray(), reader); // need to ensure reader is closed if (reader != null) reader.Close(); if (assemblyId == null) { this.SetError(Engine.ErrorMessage); return null; } item.AssemblyId = assemblyId; item.CompileTimeUtc = DateTime.UtcNow; item.FileName = fileName; item.SafeClassName = safeClassName; this.LoadedAssemblies[fileNameHash] = item; } return item; } This code uses a LoadedAssembly dictionary which is comprised of a structure that holds a reference to a compiled assembly, a full filename and file timestamp and an assembly id. LoadedAssemblies (defined on the base class shown earlier) is essentially a cache for compiled assemblies and they are identified by a hash id. In the case of files the hash is a GetHashCode() from the full filename of the template. The template is checked for in the cache and if not found the file stamp is checked. If that’s newer than the cache’s compilation date the template is recompiled otherwise the version in the cache is used. All the core work defers to a RazorEngine<T> instance to ParseAndCompileTemplate(). The three rendering specific methods then are rather simple implementations with just a few lines of code dealing with parameter and return value parsing: /// <summary> /// Renders a template to a TextWriter. Useful to write output into a stream or /// the Response object. Used for partial rendering. /// </summary> /// <param name="relativePath">Relative path to the file in the folder structure</param> /// <param name="context">Optional context object or null</param> /// <param name="writer">The textwriter to write output into</param> /// <returns></returns> public bool RenderTemplate(string relativePath, object context, TextWriter writer) { // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; CompiledAssemblyItem item = GetAssemblyFromFileAndCache(relativePath); if (item == null) { writer.Close(); return false; } try { // String result will be empty as output will be rendered into the // Response object's stream output. 
However a null result denotes // an error string result = Engine.RenderTemplateFromAssembly(item.AssemblyId, context, writer); if (result == null) { this.SetError(Engine.ErrorMessage); return false; } } catch (Exception ex) { this.SetError(ex.Message); return false; } finally { writer.Close(); } return true; } /// <summary> /// Render a template from a source file on disk to a specified outputfile. /// </summary> /// <param name="relativePath">Relative path off the template root folder. Format: path/filename.cshtml</param> /// <param name="context">Any object that will be available in the template as a dynamic of this.Context</param> /// <param name="outputFile">Optional - output file where output is written to. If not specified the /// RenderingOutputFile property is used instead /// </param> /// <returns>true if rendering succeeds, false on failure - check ErrorMessage</returns> public bool RenderTemplate(string relativePath, object context, string outputFile) { if (outputFile == null) outputFile = RenderingOutputFile; try { using (StreamWriter writer = new StreamWriter(outputFile, false, Engine.Configuration.OutputEncoding, Engine.Configuration.StreamBufferSize)) { return RenderTemplate(relativePath, context, writer); } } catch (Exception ex) { this.SetError(ex.Message); return false; } return true; } /// <summary> /// Renders a template to string. Useful for RenderTemplate /// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> public string RenderTemplateToString(string relativePath, object context) { string result = string.Empty; try { using (StringWriter writer = new StringWriter()) { // String result will be empty as output will be rendered into the // Response object's stream output. However a null result denotes // an error if (!RenderTemplate(relativePath, context, writer)) { this.SetError(Engine.ErrorMessage); return null; } result = writer.ToString(); } } catch (Exception ex) { this.SetError(ex.Message); return null; } return result; } The idea is that you can create custom host container implementations that do exactly what you want fairly easily. Take a look at both the RazorFolderHostContainer and RazorStringHostContainer classes for the basic concepts you can use to create custom implementations. Notice also that you can set the engine’s PerRequestConfigurationData() from the host container: // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; which when set to a non-null value is passed to the Template’s InitializeTemplate() method. This method receives an object parameter which you can cast as needed: public override void InitializeTemplate(object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; } With this data you can then configure any custom properties or objects on your main template class. It’s an easy way to pass data from the HostContainer all the way down into the template. The type you use is of type object so you have to cast it yourself, and it must be serializable since it will likely run in a separate AppDomain. 
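To make that concrete, here is a small hedged sketch. MyTemplateConfiguration, MyTemplateBase and the UserName property are invented names, not part of the library, and it assumes InitializeTemplate() is declared virtual on the template base, as the override above suggests. The points that matter are the [Serializable] attribute, needed because the object is marshaled into the template's AppDomain when UseAppDomain is true, and the cast back to the concrete type inside the override:

// Hypothetical configuration class - the name and the UserName property are made up.
// [Serializable] is what lets the instance cross the AppDomain boundary.
[Serializable]
public class MyTemplateConfiguration
{
    public string TemplatePath { get; set; }
    public string TemplateRelativePath { get; set; }
    public string UserName { get; set; }   // extra application-specific data
}

// In the host container, assign it before rendering, e.g.:
//   Engine.TemplatePerRequestConfigurationData = new MyTemplateConfiguration
//   {
//       TemplatePath = Path.Combine(this.TemplatePath, relativePath),
//       TemplateRelativePath = relativePath,
//       UserName = "rick"
//   };

// In a custom template base class, pick the values back up:
public class MyTemplateBase : RazorTemplateBase
{
    public string UserName { get; set; }

    public override void InitializeTemplate(object configurationData)
    {
        var config = configurationData as MyTemplateConfiguration;
        if (config != null)
            UserName = config.UserName;
    }
}

Templates generated against MyTemplateBase can then use UserName directly, without any callback into the host.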
This might seem like an ugly way to pass data around. Normally I'd use an event delegate to call back from the engine to the host, but since this is running over AppDomain boundaries events get really tricky, and passing a template instance back up into the host across AppDomain boundaries doesn't work due to serialization issues. So it's easier to pass the data from the host down into the template using this rather clumsy set-and-forward approach. It's ugly, but it's something that can be hidden in the host container implementation as I've done here. It's also not something you have to do in every implementation, so this is something of an edge case, but I know I'll need to pass a bunch of data in some of my applications and this will be the easiest way to do so.

Summing Up

Hosting the Razor runtime is something I got jazzed up about quite a bit because I have an immediate need for this type of templating/merging/scripting capability in an application I'm working on. I've also been using templating in many apps and it's always been a pain to deal with. The Razor engine makes this whole experience a lot cleaner and more lightweight, and with these wrappers I can now plug .NET based templating into my code with literally a few lines of code. That's something to cheer about… I hope some of you will find this useful as well…

Resources

The examples and code require that you download the Razor runtimes. Projects are for Visual Studio 2010 running on .NET 4.0.

Platform Installer 3.0 (install WebMatrix or MVC 3 for the Razor runtimes)
Latest Code in Subversion Repository
Download Snapshot of the Code
Documentation (CHM Help File)

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, .NET


  • $_POST data returns empty when headers are > POST_MAX_SIZE

    - by Jared
    Hi Hopefully someone here might have an answer to my question. I have a basic form that contains simple fields, like name, number, email address etc and 1 file upload field. I am trying to add some validation into my script that detects if the file is too large and then rejects the user back to the form to select/upload a smaller file. My problem is, if a user selects a file that is bigger than my validation file size rule and larger than php.ini POST_MAX_SIZE/UPLOAD_MAX_FILESIZE and pushes submit, then PHP seems to try process the form only to fail on the POST_MAX_SIZE settings and then clears the entire $_POST array and returns nothing back to the form. Is there a way around this? Surely if someone uploads something than the max size configured in the php.ini then you can still get the rest of the $_POST data??? Here is my code. <?php function validEmail($email) { $isValid = true; $atIndex = strrpos($email, "@"); if (is_bool($atIndex) && !$atIndex) { $isValid = false; } else { $domain = substr($email, $atIndex+1); $local = substr($email, 0, $atIndex); $localLen = strlen($local); $domainLen = strlen($domain); if ($localLen < 1 || $localLen > 64) { // local part length exceeded $isValid = false; } else if ($domainLen < 1 || $domainLen > 255) { // domain part length exceeded $isValid = false; } else if ($local[0] == '.' || $local[$localLen-1] == '.') { // local part starts or ends with '.' $isValid = false; } else if (preg_match('/\\.\\./', $local)) { // local part has two consecutive dots $isValid = false; } else if (!preg_match('/^[A-Za-z0-9\\-\\.]+$/', $domain)) { // character not valid in domain part $isValid = false; } else if (preg_match('/\\.\\./', $domain)) { // domain part has two consecutive dots $isValid = false; } else if (!preg_match('/^(\\\\.|[A-Za-z0-9!#%&`_=\\/$\'*+?^{}|~.-])+$/', str_replace("\\\\","",$local))) { // character not valid in local part unless // local part is quoted if (!preg_match('/^"(\\\\"|[^"])+"$/', str_replace("\\\\","",$local))) { $isValid = false; } } } return $isValid; } //setup post variables @$name = htmlspecialchars(trim($_REQUEST['name'])); @$emailCheck = htmlspecialchars(trim($_REQUEST['email'])); @$organisation = htmlspecialchars(trim($_REQUEST['organisation'])); @$title = htmlspecialchars(trim($_REQUEST['title'])); @$phone = htmlspecialchars(trim($_REQUEST['phone'])); @$location = htmlspecialchars(trim($_REQUEST['location'])); @$description = htmlspecialchars(trim($_REQUEST['description'])); @$fileError = 0; @$phoneError = ""; //setup file upload handler $target_path = 'uploads/'; $filename = basename( @$_FILES['uploadedfile']['name']); $max_size = 8000000; // maximum file size (8mb in bytes) NB: php.ini max filesize upload is 10MB on test environment. $allowed_filetypes = Array(".pdf", ".doc", ".zip", ".txt", ".xls", ".docx", ".csv", ".rtf"); //put extensions in here that should be uploaded only. $ext = substr($filename, strpos($filename,'.'), strlen($filename)-1); // Get the extension from the filename. if(!is_writable($target_path)) die('You cannot upload to the specified directory, please CHMOD it to 777.'); //Check if we can upload to the specified upload folder. //display form function function displayForm($name, $emailCheck, $organisation, $phone, $title, $location, $description, $phoneError, $allowed_filetypes, $ext, $filename, $fileError) { //make $emailCheck global so function can get value from global scope. 
global $emailCheck; global $max_size; echo '<form action="geodetic_form.php" method="post" name="contact" id="contact" enctype="multipart/form-data">'."\n". '<fieldset>'."\n".'<div>'."\n"; //name echo '<label for="name"><span class="mandatory">*</span>Your name:</label>'."\n". '<input type="text" name="name" id="name" class="inputText required" value="'. $name .'" />'."\n"; //check if name field is filled out if (isset($_REQUEST['submit']) && empty($name)) { echo '<label for="name" class="error">Please enter your name.</label>'."\n"; } echo '</div>'."\n". '<div>'."\n"; //Email echo '<label for="email"><span class="mandatory">*</span>Your email:</label>'."\n". '<input type="text" name="email" id="email" class="inputText required email" value="'. $emailCheck .'" />'."\n"; // check if email field is filled out and proper format if (isset($_REQUEST['submit']) && validEmail($emailCheck) == false) { echo '<label for="email" class="error">Invalid email address entered.</label>'."\n"; } echo '</div>'."\n". '<div>'."\n"; //organisation echo '<label for="phone">Organisation:</label>'."\n". '<input type="text" name="organisation" id="organisation" class="inputText" value="'. $organisation .'" />'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //title echo '<label for="phone">Title:</label>'."\n". '<input type="text" name="title" id="title" class="inputText" value="'. $title .'" />'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //phone echo '<label for="phone"><span class="mandatory">*</span>Phone <br /><span class="small">(include area code)</span>:</label>'."\n". '<input type="text" name="phone" id="phone" class="inputText required" value="'. $phone .'" />'."\n"; // check if phone field is filled out that it has numbers and not characters if (isset($_REQUEST['submit']) && $phoneError == "true" && empty($phone)) echo '<label for="email" class="error">Please enter a valid phone number.</label>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //Location echo '<label class="location" for="location"><span class="mandatory">*</span>Location:</label>'."\n". '<textarea name="location" id="location" class="required">'. $location .'</textarea>'."\n"; //check if message field is filled out if (isset($_REQUEST['submit']) && empty($_REQUEST['location'])) echo '<label for="location" class="error">This field is required.</label>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //description echo '<label class="description" for="description">Description:</label>'."\n". '<textarea name="description" id="queryComments">'. $description .'</textarea>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //file upload echo '<label class="uploadedfile" for="uploadedfile">File:</label>'."\n". '<input type="file" name="uploadedfile" id="uploadedfile" value="'. $filename .'" />'."\n"; // Check if the filetype is allowed, if not DIE and inform the user. switch ($fileError) { case "1": echo '<label for="uploadedfile" class="error">The file you attempted to upload is not allowed.</label>'; break; case "2": echo '<label for="uploadedfile" class="error">The file you attempted to upload is too large.</label>'; break; } echo '</div>'."\n". '</fieldset>'; //end of form echo '<div class="submit"><input type="submit" name="submit" value="Submit" id="submit" /></div>'. 
'<div class="clear"><p><br /></p></div>'; } //end function //setup error validations if (isset($_REQUEST['submit']) && !empty($_REQUEST['phone']) && !is_numeric($_REQUEST['phone'])) $phoneError = "true"; if (isset($_REQUEST['submit']) && $_FILES['uploadedfile']['error'] != 4 && !in_array($ext, $allowed_filetypes)) $fileError = 1; if (isset($_REQUEST['submit']) && $_FILES["uploadedfile"]["size"] > $max_size) $fileError = 2; echo "this condition " . $fileError; $POST_MAX_SIZE = ini_get('post_max_size'); $mul = substr($POST_MAX_SIZE, -1); $mul = ($mul == 'M' ? 1048576 : ($mul == 'K' ? 1024 : ($mul == 'G' ? 1073741824 : 1))); if ($_SERVER['CONTENT_LENGTH'] > $mul*(int)$POST_MAX_SIZE && $POST_MAX_SIZE) echo "too big!!"; echo $POST_MAX_SIZE; if(empty($name) || empty($phone) || empty($location) || validEmail($emailCheck) == false || $phoneError == "true" || $fileError != 0) { displayForm($name, $emailCheck, $organisation, $phone, $title, $location, $description, $phoneError, $allowed_filetypes, $ext, $filename, $fileError); echo $fileError; echo "max size is: " .$max_size; echo "and file size is: " . $_FILES["uploadedfile"]["size"]; exit; } else { //copy file from temp to upload directory $path_of_uploaded_file = $target_path . $filename; $tmp_path = $_FILES["uploadedfile"]["tmp_name"]; echo $tmp_path; echo "and file size is: " . filesize($_FILES["uploadedfile"]["tmp_name"]); exit; if(is_uploaded_file($tmp_path)) { if(!copy($tmp_path,$path_of_uploaded_file)) { echo 'error while copying the uploaded file'; } } //test debug stuff echo "sending email..."; exit; } ?> PHP is returning this error in the log: [29-Apr-2010 10:32:47] PHP Warning: POST Content-Length of 57885895 bytes exceeds the limit of 10485760 bytes in Unknown on line 0 Excuse all the debug stuff :) FTR, I am running PHP 5.1.2 on IIS. TIA Jared


  • Can't connect to SSL web service with WS-Security using PHP SOAP extension - certificate, complex WSDL

    - by BillF
    Using the PHP5 SOAP extension I have been unable to connect to a web service having an https endpoint, with client certificate and using WS-Security, although I can connect using soapUI with the exact same wsdl and client certificate, and obtain the normal response to the request. There is no HTTP authentication and no proxy is involved. The message I get is 'Could not connect to host'. Have been able to verify that I am NOT hitting the host server. (Earlier I wrongly said that I was hitting the server.) The self-signed client SSL certificate is a .pem file converted by openssl from a .p12 keystore which in turn was converted by keytool from a .jks keystore having a single entry consisting of private key and client certificate. In soapUI I did not need to supply a server private certificate, the only two files I gave it were the wdsl and pem. I did have to supply the pem and its passphrase to be able to connect. I am speculating that despite the error message my problem might actually be in the formation of the XML request rather than the SSL connection itself. The wsdl I have been given has nested complex types. The php server is on my Windows XP laptop with IIS. The code, data values and WSDL extracts are shown below. (The WSSoapClient class simply extends SoapClient, adding a WS-Security Username Token header with mustUnderstand = true and including a nonce, both of which the soapUI call had required.) Would so much appreciate any help. I'm a newbie thrown in at the deep end, and how! Have done vast amounts of Googling on this over many days, following many suggestions and have read Pro PHP by Kevin McArthur. An attempt to use classmaps in place of nested arrays also fell flat. The Code class STEeService { public function invokeWebService(array $connection, $operation, array $request) { try { $localCertificateFilespec = $connection['localCertificateFilespec']; $localCertificatePassphrase = $connection['localCertificatePassphrase']; $sslOptions = array( 'ssl' => array( 'local_cert' => $localCertificateFilespec, 'passphrase' => $localCertificatePassphrase, 'allow_self-signed' => true, 'verify_peer' => false ) ); $sslContext = stream_context_create($sslOptions); $clientArguments = array( 'stream_context' => $sslContext, 'local_cert' => $localCertificateFilespec, 'passphrase' => $localCertificatePassphrase, 'trace' => true, 'exceptions' => true, 'encoding' => 'UTF-8', 'soap_version' => SOAP_1_1 ); $oClient = new WSSoapClient($connection['wsdlFilespec'], $clientArguments); $oClient->__setUsernameToken($connection['username'], $connection['password']); return $oClient->__soapCall($operation, $request); } catch (exception $e) { throw new Exception("Exception in eServices " . $operation . " ," . 
$e->getMessage(), "\n"); } } } $connection is as follows: array(5) { ["username"]=> string(8) "DFU00050" ["password"]=> string(10) "Fabricate1" ["wsdlFilespec"]=> string (63) "c:/inetpub/wwwroot/DMZExternalService_Concrete_WSDL_Staging.xml" ["localCertificateFilespec"]=> string(37) "c:/inetpub/wwwroot/ClientKeystore.pem" ["localCertificatePassphrase"]=> string(14) "password123456" } $clientArguments is as follows: array(7) { ["stream_context"]=> resource(8) of type (stream-context) ["local_cert"]=> string(37) "c:/inetpub/wwwroot/ClientKeystore.pem" ["passphrase"]=> string(14) "password123456" ["trace"]=> bool(true) ["exceptions"]=> bool(true) ["encoding"]=> string(5) "UTF-8" ["soap_version"]=> int(1) } $operation is as follows: 'getConsignmentDetails' $request is as follows: array(1) { [0]=> array(2) { ["header"]=> array(2) { ["source"]=> string(9) "customerA" ["accountNo"]=> string(8) "10072906" } ["consignmentId"]=> string(11) "GKQ00000085" } } Note how there is an extra level of nesting, an array wrapping the request which is itself an array. This was suggested in a post although I don't see the reason, but it seems to help avoid other exceptions. The exception thrown by ___soapCall is as follows: object(SoapFault)#6 (9) { ["message":protected]=> string(25) "Could not connect to host" ["string":"Exception":private]=> string(0) "" ["code":protected]=> int(0) ["file":protected]=> string(43) "C:\Inetpub\wwwroot\eServices\WSSecurity.php" ["line":protected]=> int(85) ["trace":"Exception":private]=> array(5) { [0]=> array(6) { ["file"]=> string(43) "C:\Inetpub\wwwroot\eServices\WSSecurity.php" ["line"]=> int(85) ["function"]=> string(11) "__doRequest" ["class"]=> string(10) "SoapClient" ["type"]=> string(2) "->" ["args"]=> array(4) { [0]=> string(1240) " DFU00050 Fabricate1 E0ByMUA= 2010-10-28T13:13:52Z customerA10072906GKQ00000085 " [1]=> string(127) "https://services.startrackexpress.com.au:7560/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1" [2]=> string(104) "/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1/getConsignmentDetails" [3]=> int(1) } } [1]=> array(4) { ["function"]=> string(11) "__doRequest" ["class"]=> string(39) "startrackexpress\eservices\WSSoapClient" ["type"]=> string(2) "->" ["args"]=> array(5) { [0]=> string(1240) " DFU00050 Fabricate1 E0ByMUA= 2010-10-28T13:13:52Z customerA10072906GKQ00000085 " [1]=> string(127) "https://services.startrackexpress.com.au:7560/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1" [2]=> string(104) "/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1/getConsignmentDetails" [3]=> int(1) [4]=> int(0) } } [2]=> array(6) { ["file"]=> string(43) "C:\Inetpub\wwwroot\eServices\WSSecurity.php" ["line"]=> int(70) ["function"]=> string(10) "__soapCall" ["class"]=> string(10) "SoapClient" ["type"]=> string(2) "->" ["args"]=> array(4) { [0]=> string(21) "getConsignmentDetails" [1]=> array(1) { [0]=> array(2) { ["header"]=> array(2) { ["source"]=> string(9) "customerA" ["accountNo"]=> string(8) "10072906" } ["consignmentId"]=> string(11) "GKQ00000085" } } [2]=> NULL [3]=> object(SoapHeader)#5 (4) { ["namespace"]=> string(81) "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" ["name"]=> string(8) "Security" ["data"]=> object(SoapVar)#4 (2) { ["enc_type"]=> int(147) ["enc_value"]=> string(594) " DFU00050 Fabricate1 E0ByMUA= 2010-10-28T13:13:52Z " } ["mustUnderstand"]=> bool(true) } } } [3]=> 
array(6) { ["file"]=> string(42) "C:\Inetpub\wwwroot\eServices\eServices.php" ["line"]=> int(87) ["function"]=> string(10) "__soapCall" ["class"]=> string(39) "startrackexpress\eservices\WSSoapClient" ["type"]=> string(2) "->" ["args"]=> array(2) { [0]=> string(21) "getConsignmentDetails" [1]=> array(1) { [0]=> array(2) { ["header"]=> array(2) { ["source"]=> string(9) "customerA" ["accountNo"]=> string(8) "10072906" } ["consignmentId"]=> string(11) "GKQ00000085" } } } } [4]=> array(6) { ["file"]=> string(58) "C:\Inetpub\wwwroot\eServices\EnquireConsignmentDetails.php" ["line"]=> int(44) ["function"]=> string(16) "invokeWebService" ["class"]=> string(38) "startrackexpress\eservices\STEeService" ["type"]=> string(2) "->" ["args"]=> array(3) { [0]=> array(5) { ["username"]=> string(10) "DFU00050 " ["password"]=> string(12) "Fabricate1 " ["wsdlFilespec"]=> string(63) "c:/inetpub/wwwroot/DMZExternalService_Concrete_WSDL_Staging.xml" ["localCertificateFilespec"]=> string(37) "c:/inetpub/wwwroot/ClientKeystore.pem" ["localCertificatePassphrase"]=> string(14) "password123456" } [1]=> string(21) "getConsignmentDetails" [2]=> array(1) { [0]=> array(2) { ["header"]=> array(2) { ["source"]=> string(9) "customerA" ["accountNo"]=> string(8) "10072906" } ["consignmentId"]=> string(11) "GKQ00000085" } } } } } ["previous":"Exception":private]=> NULL ["faultstring"]=> string(25) "Could not connect to host" ["faultcode"]=> string(4) "HTTP" } Here are some WSDL extracts (TIBCO BusinessWorks): <xsd:complexType name="TransactionHeaderType"> <xsd:sequence> <xsd:element name="source" type="xsd:string"/> <xsd:element name="accountNo" type="xsd:integer"/> <xsd:element name="userId" type="xsd:string" minOccurs="0"/> <xsd:element name="transactionId" type="xsd:string" minOccurs="0"/> <xsd:element name="transactionDatetime" type="xsd:dateTime" minOccurs="0"/> </xsd:sequence> </xsd:complexType> <xsd:element name="getConsignmentDetailRequest"> <xsd:complexType> <xsd:sequence> <xsd:element name="header" type="prim:TransactionHeaderType"/> <xsd:element name="consignmentId" type="prim:ID" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="getConsignmentDetailResponse"> <xsd:complexType> <xsd:sequence> <xsd:element name="consignment" type="freight:consignmentType" minOccurs="0" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="getConsignmentDetailRequest"> <xsd:complexType> <xsd:sequence> <xsd:element name="header" type="prim:TransactionHeaderType"/> <xsd:element name="consignmentId" type="prim:ID" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="getConsignmentDetailResponse"> <xsd:complexType> <xsd:sequence> <xsd:element name="consignment" type="freight:consignmentType" minOccurs="0" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> </xsd:element> <wsdl:operation name="getConsignmentDetails"> <wsdl:input message="tns:getConsignmentDetailsRequest"/> <wsdl:output message="tns:getConsignmentDetailsResponse"/> <wsdl:fault name="fault1" message="tns:fault"/> </wsdl:operation> <wsdl:service name="ExternalOps"> <wsdl:port name="OperationsEndpoint1" binding="tns:OperationsEndpoint1Binding"> <soap:address location="https://services.startrackexpress.com.au:7560/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1"/> </wsdl:port> </wsdl:service> And here in case it's relevant is the WSSoapClient class: <?PHP namespace startrackexpress\eservices; use SoapClient, SoapVar, 
SoapHeader; class WSSoapClient extends SoapClient { private $username; private $password; /*Generates a WS-Security header*/ private function wssecurity_header() { $timestamp = gmdate('Y-m-d\TH:i:s\Z'); $nonce = mt_rand(); $passdigest = base64_encode(pack('H*', sha1(pack('H*', $nonce).pack('a*', $timestamp).pack('a*', $this->password)))); $auth = ' <wsse:Security SOAP-ENV:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken> <wsse:Username>' . $this->username . '</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">' . $this->password . '</wsse:Password> <wsse:Nonce>' . base64_encode(pack('H*', $nonce)).'</wsse:Nonce> <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">' . $timestamp . '</wsu:Created> </wsse:UsernameToken> </wsse:Security> '; $authvalues = new SoapVar($auth, XSD_ANYXML); $header = new SoapHeader("http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd", "Security",$authvalues, true); return $header; } // Sets a username and passphrase public function __setUsernameToken($username,$password) { $this->username=$username; $this->password=$password; } // Overwrites the original method, adding the security header public function __soapCall($function_name, $arguments, $options=null, $input_headers=null, $output_headers=null) { try { $result = parent::__soapCall($function_name, $arguments, $options, $this->wssecurity_header()); return $result; } catch (exception $e) { throw new Exception("Exception in __soapCall, " . $e->getMessage(), "\n"); } } } ?> Update: The request XML would have been as follows: <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://startrackexpress/Common/Primitives/v1" xmlns:ns2="http://startrackexpress/Common/actions/externals/Consignment/v1" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <SOAP-ENV:Header> <wsse:Security SOAP-ENV:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken> <wsse:Username>DFU00050</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Fabricate1</wsse:Password> <wsse:Nonce>M4FIeGA=</wsse:Nonce> <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2010-10-29T14:05:27Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </SOAP-ENV:Header> <SOAP-ENV:Body><ns2:getConsignmentDetailRequest> <ns2:header><ns1:source>customerA</ns1:source><ns1:accountNo>10072906</ns1:accountNo></ns2:header> <ns2:consignmentId>GKQ00000085</ns2:consignmentId> </ns2:getConsignmentDetailRequest></SOAP-ENV:Body> </SOAP-ENV:Envelope> This was obtained with the following code in WSSoapClient: public function __doRequest($request, $location, $action, $version) { echo "<p> " . htmlspecialchars($request) . " </p>" ; return parent::__doRequest($request, $location, $action, $version); }
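One detail worth flagging in the $sslOptions array above, as an observation rather than a confirmed fix: PHP's SSL context option is spelled allow_self_signed, with underscores, so the hyphenated 'allow_self-signed' key is silently ignored. A context built only from documented option names would look like the sketch below; whether that alone clears the 'Could not connect to host' fault still depends on the server's TLS and client-certificate requirements (the variable names are the ones already used in the question):

<?php
// Sketch: stream context using the SSL option names PHP recognises.
$sslContext = stream_context_create(array(
    'ssl' => array(
        'local_cert'        => $localCertificateFilespec,    // client cert + private key (.pem)
        'passphrase'        => $localCertificatePassphrase,
        'allow_self_signed' => true,    // underscores, not 'allow_self-signed'
        'verify_peer'       => false,
        // 'cafile'         => '/path/to/server-ca.pem',      // only if verify_peer is enabled
    ),
));

$clientArguments = array(
    'stream_context' => $sslContext,
    'local_cert'     => $localCertificateFilespec,
    'passphrase'     => $localCertificatePassphrase,
    'trace'          => true,
    'exceptions'     => true,
    'encoding'       => 'UTF-8',
    'soap_version'   => SOAP_1_1,
);

$oClient = new WSSoapClient($connection['wsdlFilespec'], $clientArguments);
?>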


  • .NET Winform AJAX Login Services

    - by AdamSane
    I am working on a Windows Form that connects to a ASP.NET membership database and I am trying to use the AJAX Login Service. No matter what I do I keep on getting 404 errors on the Authentication_JSON_AppService.axd call. Web Config Below <?xml version="1.0"?> <!-- Note: As an alternative to hand editing this file you can use the web admin tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere"/> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <connectionStrings <!-- Removed --> /> <appSettings/> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. 
--> <compilation debug="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> </assemblies> </compilation> <membership defaultProvider="dbSqlMembershipProvider"> <providers> <add name="dbSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" applicationName="/" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10" passwordStrengthRegularExpression=""/> </providers> </membership> <roleManager enabled="true" defaultProvider="dbSqlRoleProvider"> <providers> <add connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" applicationName="/" name="dbSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> </providers> </roleManager> <authentication mode="Forms"> <forms loginUrl="Login.aspx" cookieless="UseCookies" protection="All" timeout="30" requireSSL="false" slidingExpiration="true" defaultUrl="default.aspx" enableCrossAppRedirects="false"/> </authentication> <authorization> <allow users="*"/> <allow users="?"/> </authorization> <customErrors mode="Off"> </customErrors> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" validate="false" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <location path="~/Admin"> <system.web> <authorization> <allow roles="Admin"/> <allow roles="System"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Admin/System"> <system.web> <authorization> <allow roles="System"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Export"> <system.web> <authorization> <allow 
roles="Export"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Field"> <system.web> <authorization> <allow roles="Field"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Default.aspx"> <system.web> <authorization> <allow roles="Admin"/> <allow roles="System"/> <allow roles="Export"/> <allow roles="Field"/> <deny users="?"/> </authorization> </system.web> </location> <location path="~/Login.aspx"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/App_Themes"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/Includes"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/WebServices"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/Authentication_JSON_AppService.axd"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" type="Microsoft.CSharp.CSharpCodeProvider,System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="OptionInfer" value="true"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule"/> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory"/> <remove name="ScriptHandlerFactoryAppServices"/> <remove name="ScriptResource"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" verb="GET,HEAD" path="ScriptResource.axd" preCondition="integratedMode" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> </configuration>
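One element that does not appear anywhere in the posted web.config is the <system.web.extensions> block that actually enables the JSON authentication service; the configSections entries only declare the schema for it. Whether that alone explains the 404 from the WinForms client is uncertain, but the service does have to be switched on before Authentication_JSON_AppService.axd will respond. A sketch for ASP.NET AJAX 3.5, placed directly inside <configuration> as a sibling of <system.web>:

<!-- Sketch: enable the JSON authentication (and optionally role) service. -->
<system.web.extensions>
  <scripting>
    <webServices>
      <authenticationService enabled="true" requireSSL="false" />
      <roleService enabled="true" />
    </webServices>
  </scripting>
</system.web.extensions>

When calling the service from a WinForms client, the request URL also needs the full application path plus the method name, for example .../Authentication_JSON_AppService.axd/Login, which is another frequent source of 404s.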

