Search Results

Search found 20484 results on 820 pages for 'small projects'.


  • Azure Mobile Services: what files does it consist of?

    - by svdoever
    Azure Mobile Services is a platform that provides a small set of functionality, consisting of authentication, custom data tables, custom APIs, scheduling scripts and push notifications, to be used as the back-end of a mobile application or, if you want, of any application or web site. As described in my previous post Azure Mobile Services: lessons learned, the documentation on what can be used in the custom scripts is a bit minimalistic. The list below of all files the complete Azure Mobile Services platform consists of can shed some light on what is available in the platform. In follow-up posts I will provide more detailed information on what we can conclude from this list of files.

    Below are the files available in the Azure Mobile Services platform. The files listed first, under ./App_Data/config, describe your data model, API scripts, scheduler scripts and table scripts; those are the files you configure/construct to provide the “configuration”/implementation of your mobile service (a minimal example of a table script follows the list). The files are located in a folder like C:\DWASFiles\Sites\youreservice\VirtualDirectory0\site\wwwroot. One file is missing in the list below, and that is the event log file C:\DWASFiles\Sites\youreservice\VirtualDirectory0\site\LogFiles\eventlog.xml, where messages you write with, for example, console.log(), and exceptions caught by the system, are recorded. NOTA BENE: Azure Mobile Services is under active development; new releases may change the list of files.

    ./app.js ./App_Data/config/datamodel.json ./App_Data/config/scripts/api/youreapi.js ./App_Data/config/scripts/api/youreapi.json ./App_Data/config/scripts/scheduler/placeholder ./App_Data/config/scripts/scheduler/youresheduler.js ./App_Data/config/scripts/shared/placeholder ./App_Data/config/scripts/table/placeholder ./App_Data/config/scripts/table/yourtable.insert.js ./App_Data/config/scripts/table/yourtable.update.js ./App_Data/config/scripts/table/yourtable.delete.js ./App_Data/config/scripts/table/yourtable.read.js ./node_modules/apn/index.js ./node_modules/apn/lib/connection.js ./node_modules/apn/lib/device.js ./node_modules/apn/lib/errors.js ./node_modules/apn/lib/feedback.js ./node_modules/apn/lib/notification.js ./node_modules/apn/lib/util.js ./node_modules/apn/node_modules/q/package.json ./node_modules/apn/node_modules/q/q.js ./node_modules/apn/package.json ./node_modules/azure/lib/azure.js ./node_modules/azure/lib/cli/blobUtils.js ./node_modules/azure/lib/cli/cacheUtils.js ./node_modules/azure/lib/cli/callbackAggregator.js ./node_modules/azure/lib/cli/cert.js ./node_modules/azure/lib/cli/channel.js ./node_modules/azure/lib/cli/cli.js ./node_modules/azure/lib/cli/commands/account.js ./node_modules/azure/lib/cli/commands/config.js ./node_modules/azure/lib/cli/commands/deployment.js ./node_modules/azure/lib/cli/commands/deployment_.js ./node_modules/azure/lib/cli/commands/help.js ./node_modules/azure/lib/cli/commands/log.js ./node_modules/azure/lib/cli/commands/log_.js ./node_modules/azure/lib/cli/commands/repository.js ./node_modules/azure/lib/cli/commands/repository_.js ./node_modules/azure/lib/cli/commands/service.js ./node_modules/azure/lib/cli/commands/site.js ./node_modules/azure/lib/cli/commands/site_.js ./node_modules/azure/lib/cli/commands/vm.js ./node_modules/azure/lib/cli/common.js ./node_modules/azure/lib/cli/constants.js ./node_modules/azure/lib/cli/generate-psm1-utils.js ./node_modules/azure/lib/cli/generate-psm1.js ./node_modules/azure/lib/cli/iaas/blobserviceex.js ./node_modules/azure/lib/cli/iaas/deleteImage.js 
./node_modules/azure/lib/cli/iaas/image.js ./node_modules/azure/lib/cli/iaas/upload/blobInfo.js ./node_modules/azure/lib/cli/iaas/upload/bufferStream.js ./node_modules/azure/lib/cli/iaas/upload/intSet.js ./node_modules/azure/lib/cli/iaas/upload/jobTracker.js ./node_modules/azure/lib/cli/iaas/upload/pageBlob.js ./node_modules/azure/lib/cli/iaas/upload/streamMerger.js ./node_modules/azure/lib/cli/iaas/upload/uploadVMImage.js ./node_modules/azure/lib/cli/iaas/upload/vhdTools.js ./node_modules/azure/lib/cli/keyFiles.js ./node_modules/azure/lib/cli/patch-winston.js ./node_modules/azure/lib/cli/templates/node/iisnode.yml ./node_modules/azure/lib/cli/utils.js ./node_modules/azure/lib/diagnostics/logger.js ./node_modules/azure/lib/http/webresource.js ./node_modules/azure/lib/serviceruntime/fileinputchannel.js ./node_modules/azure/lib/serviceruntime/goalstatedeserializer.js ./node_modules/azure/lib/serviceruntime/namedpipeinputchannel.js ./node_modules/azure/lib/serviceruntime/namedpipeoutputchannel.js ./node_modules/azure/lib/serviceruntime/protocol1runtimeclient.js ./node_modules/azure/lib/serviceruntime/protocol1runtimecurrentstateclient.js ./node_modules/azure/lib/serviceruntime/protocol1runtimegoalstateclient.js ./node_modules/azure/lib/serviceruntime/roleenvironment.js ./node_modules/azure/lib/serviceruntime/runtimekernel.js ./node_modules/azure/lib/serviceruntime/runtimeversionmanager.js ./node_modules/azure/lib/serviceruntime/runtimeversionprotocolclient.js ./node_modules/azure/lib/serviceruntime/xmlcurrentstateserializer.js ./node_modules/azure/lib/serviceruntime/xmlgoalstatedeserializer.js ./node_modules/azure/lib/serviceruntime/xmlroleenvironmentdatadeserializer.js ./node_modules/azure/lib/services/blob/blobservice.js ./node_modules/azure/lib/services/blob/hmacsha256sign.js ./node_modules/azure/lib/services/blob/models/blobresult.js ./node_modules/azure/lib/services/blob/models/blocklistresult.js ./node_modules/azure/lib/services/blob/models/containeraclresult.js ./node_modules/azure/lib/services/blob/models/containerresult.js ./node_modules/azure/lib/services/blob/models/leaseresult.js ./node_modules/azure/lib/services/blob/models/listblobsresultcontinuation.js ./node_modules/azure/lib/services/blob/models/listcontainersresultcontinuation.js ./node_modules/azure/lib/services/blob/models/servicepropertiesresult.js ./node_modules/azure/lib/services/blob/sharedaccesssignature.js ./node_modules/azure/lib/services/blob/sharedkey.js ./node_modules/azure/lib/services/blob/sharedkeylite.js ./node_modules/azure/lib/services/core/connectionstringparser.js ./node_modules/azure/lib/services/core/exponentialretrypolicyfilter.js ./node_modules/azure/lib/services/core/linearretrypolicyfilter.js ./node_modules/azure/lib/services/core/servicebusserviceclient.js ./node_modules/azure/lib/services/core/servicebussettings.js ./node_modules/azure/lib/services/core/serviceclient.js ./node_modules/azure/lib/services/core/servicemanagementclient.js ./node_modules/azure/lib/services/core/servicemanagementsettings.js ./node_modules/azure/lib/services/core/servicesettings.js ./node_modules/azure/lib/services/core/storageserviceclient.js ./node_modules/azure/lib/services/core/storageservicesettings.js ./node_modules/azure/lib/services/queue/models/listqueuesresultcontinuation.js ./node_modules/azure/lib/services/queue/models/queuemessageresult.js ./node_modules/azure/lib/services/queue/models/queueresult.js ./node_modules/azure/lib/services/queue/models/servicepropertiesresult.js 
./node_modules/azure/lib/services/queue/queueservice.js ./node_modules/azure/lib/services/serviceBus/models/acstokenresult.js ./node_modules/azure/lib/services/serviceBus/models/queuemessageresult.js ./node_modules/azure/lib/services/serviceBus/models/queueresult.js ./node_modules/azure/lib/services/serviceBus/models/ruleresult.js ./node_modules/azure/lib/services/serviceBus/models/subscriptionresult.js ./node_modules/azure/lib/services/serviceBus/models/topicresult.js ./node_modules/azure/lib/services/serviceBus/servicebusservice.js ./node_modules/azure/lib/services/serviceBus/wrap.js ./node_modules/azure/lib/services/serviceBus/wrapservice.js ./node_modules/azure/lib/services/serviceBus/wraptokenmanager.js ./node_modules/azure/lib/services/serviceManagement/models/roleparser.js ./node_modules/azure/lib/services/serviceManagement/models/roleschema.json ./node_modules/azure/lib/services/serviceManagement/models/servicemanagementserialize.js ./node_modules/azure/lib/services/serviceManagement/servicemanagementservice.js ./node_modules/azure/lib/services/table/batchserviceclient.js ./node_modules/azure/lib/services/table/models/entityresult.js ./node_modules/azure/lib/services/table/models/queryentitiesresultcontinuation.js ./node_modules/azure/lib/services/table/models/querytablesresultcontinuation.js ./node_modules/azure/lib/services/table/models/servicepropertiesresult.js ./node_modules/azure/lib/services/table/models/tableresult.js ./node_modules/azure/lib/services/table/sharedkeylitetable.js ./node_modules/azure/lib/services/table/sharedkeytable.js ./node_modules/azure/lib/services/table/tablequery.js ./node_modules/azure/lib/services/table/tableservice.js ./node_modules/azure/lib/util/atomhandler.js ./node_modules/azure/lib/util/certificates/der.js ./node_modules/azure/lib/util/certificates/pkcs.js ./node_modules/azure/lib/util/constants.js ./node_modules/azure/lib/util/iso8061date.js ./node_modules/azure/lib/util/js2xml.js ./node_modules/azure/lib/util/rfc1123date.js ./node_modules/azure/lib/util/util.js ./node_modules/azure/lib/util/validate.js ./node_modules/azure/LICENSE.txt ./node_modules/azure/node_modules/async/index.js ./node_modules/azure/node_modules/async/lib/async.js ./node_modules/azure/node_modules/async/LICENSE ./node_modules/azure/node_modules/async/package.json ./node_modules/azure/node_modules/azure/lib/azure.js ./node_modules/azure/node_modules/azure/lib/diagnostics/logger.js ./node_modules/azure/node_modules/azure/lib/http/webresource.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/fileinputchannel.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/goalstatedeserializer.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/namedpipeinputchannel.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/namedpipeoutputchannel.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/protocol1runtimeclient.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/protocol1runtimecurrentstateclient.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/protocol1runtimegoalstateclient.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/roleenvironment.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/runtimekernel.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/runtimeversionmanager.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/runtimeversionprotocolclient.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/xmlcurrentstateserializer.js 
./node_modules/azure/node_modules/azure/lib/serviceruntime/xmlgoalstatedeserializer.js ./node_modules/azure/node_modules/azure/lib/serviceruntime/xmlroleenvironmentdatadeserializer.js ./node_modules/azure/node_modules/azure/lib/services/blob/blobservice.js ./node_modules/azure/node_modules/azure/lib/services/blob/internal/sharedaccesssignature.js ./node_modules/azure/node_modules/azure/lib/services/blob/internal/sharedkey.js ./node_modules/azure/node_modules/azure/lib/services/blob/internal/sharedkeylite.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/blobresult.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/blocklistresult.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/containeraclresult.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/containerresult.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/leaseresult.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/listblobsresultcontinuation.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/listcontainersresultcontinuation.js ./node_modules/azure/node_modules/azure/lib/services/blob/models/servicepropertiesresult.js ./node_modules/azure/node_modules/azure/lib/services/core/connectionstringparser.js ./node_modules/azure/node_modules/azure/lib/services/core/exponentialretrypolicyfilter.js ./node_modules/azure/node_modules/azure/lib/services/core/hmacsha256sign.js ./node_modules/azure/node_modules/azure/lib/services/core/linearretrypolicyfilter.js ./node_modules/azure/node_modules/azure/lib/services/core/servicebusserviceclient.js ./node_modules/azure/node_modules/azure/lib/services/core/servicebussettings.js ./node_modules/azure/node_modules/azure/lib/services/core/serviceclient.js ./node_modules/azure/node_modules/azure/lib/services/core/servicemanagementclient.js ./node_modules/azure/node_modules/azure/lib/services/core/servicemanagementsettings.js ./node_modules/azure/node_modules/azure/lib/services/core/servicesettings.js ./node_modules/azure/node_modules/azure/lib/services/core/sqlserviceclient.js ./node_modules/azure/node_modules/azure/lib/services/core/storageserviceclient.js ./node_modules/azure/node_modules/azure/lib/services/core/storageservicesettings.js ./node_modules/azure/node_modules/azure/lib/services/queue/models/listqueuesresultcontinuation.js ./node_modules/azure/node_modules/azure/lib/services/queue/models/queuemessageresult.js ./node_modules/azure/node_modules/azure/lib/services/queue/models/queueresult.js ./node_modules/azure/node_modules/azure/lib/services/queue/models/servicepropertiesresult.js ./node_modules/azure/node_modules/azure/lib/services/queue/queueservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/apnsservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/gcmservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/internal/sharedaccesssignature.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/internal/wrap.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/internal/wraptokenmanager.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/acstokenresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/notificationhubresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/queuemessageresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/queueresult.js 
./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/registrationresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/resourceresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/ruleresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/subscriptionresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/models/topicresult.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/notificationhubservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/servicebusservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/servicebusservicebase.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/wnsservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceBus/wrapservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceManagement/hdinsightservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceManagement/models/roleparser.js ./node_modules/azure/node_modules/azure/lib/services/serviceManagement/models/roleschema.json ./node_modules/azure/node_modules/azure/lib/services/serviceManagement/models/servicemanagementserialize.js ./node_modules/azure/node_modules/azure/lib/services/serviceManagement/servicebusmanagementservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceManagement/servicemanagementservice.js ./node_modules/azure/node_modules/azure/lib/services/serviceManagement/sqlmanagementservice.js ./node_modules/azure/node_modules/azure/lib/services/sqlAzure/models/databaseresult.js ./node_modules/azure/node_modules/azure/lib/services/sqlAzure/sqlserveracs.js ./node_modules/azure/node_modules/azure/lib/services/sqlAzure/sqlservice.js ./node_modules/azure/node_modules/azure/lib/services/table/batchserviceclient.js ./node_modules/azure/node_modules/azure/lib/services/table/internal/sharedkeylitetable.js ./node_modules/azure/node_modules/azure/lib/services/table/internal/sharedkeytable.js ./node_modules/azure/node_modules/azure/lib/services/table/models/entityresult.js ./node_modules/azure/node_modules/azure/lib/services/table/models/listresult.js ./node_modules/azure/node_modules/azure/lib/services/table/models/queryentitiesresultcontinuation.js ./node_modules/azure/node_modules/azure/lib/services/table/models/querytablesresultcontinuation.js ./node_modules/azure/node_modules/azure/lib/services/table/models/servicepropertiesresult.js ./node_modules/azure/node_modules/azure/lib/services/table/models/tableresult.js ./node_modules/azure/node_modules/azure/lib/services/table/tablequery.js ./node_modules/azure/node_modules/azure/lib/services/table/tableservice.js ./node_modules/azure/node_modules/azure/lib/util/atomhandler.js ./node_modules/azure/node_modules/azure/lib/util/constants.js ./node_modules/azure/node_modules/azure/lib/util/date.js ./node_modules/azure/node_modules/azure/lib/util/edmtype.js ./node_modules/azure/node_modules/azure/lib/util/iso8061date.js ./node_modules/azure/node_modules/azure/lib/util/js2xml.js ./node_modules/azure/node_modules/azure/lib/util/odatahandler.js ./node_modules/azure/node_modules/azure/lib/util/rfc1123date.js ./node_modules/azure/node_modules/azure/lib/util/util.js ./node_modules/azure/node_modules/azure/lib/util/validate.js ./node_modules/azure/node_modules/azure/LICENSE.txt ./node_modules/azure/node_modules/azure/node_modules/wns/lib/wns.js ./node_modules/azure/node_modules/azure/node_modules/wns/LICENSE.txt 
./node_modules/azure/node_modules/azure/node_modules/wns/package.json ./node_modules/azure/node_modules/azure/node_modules/xml2js/lib/xml2js.js ./node_modules/azure/node_modules/azure/node_modules/xml2js/LICENSE ./node_modules/azure/node_modules/azure/node_modules/xml2js/node_modules/sax/lib/sax.js ./node_modules/azure/node_modules/azure/node_modules/xml2js/node_modules/sax/LICENSE ./node_modules/azure/node_modules/azure/node_modules/xml2js/node_modules/sax/package.json ./node_modules/azure/node_modules/azure/node_modules/xml2js/package.json ./node_modules/azure/node_modules/azure/package.json ./node_modules/azure/node_modules/colors/colors.js ./node_modules/azure/node_modules/colors/MIT-LICENSE.txt ./node_modules/azure/node_modules/colors/package.json ./node_modules/azure/node_modules/commander/index.js ./node_modules/azure/node_modules/commander/lib/commander.js ./node_modules/azure/node_modules/commander/node_modules/keypress/index.js ./node_modules/azure/node_modules/commander/node_modules/keypress/package.json ./node_modules/azure/node_modules/commander/package.json ./node_modules/azure/node_modules/dateformat/lib/dateformat.js ./node_modules/azure/node_modules/dateformat/package.json ./node_modules/azure/node_modules/easy-table/lib/table.js ./node_modules/azure/node_modules/easy-table/package.json ./node_modules/azure/node_modules/eyes/lib/eyes.js ./node_modules/azure/node_modules/eyes/LICENSE ./node_modules/azure/node_modules/eyes/package.json ./node_modules/azure/node_modules/log/index.js ./node_modules/azure/node_modules/log/lib/log.js ./node_modules/azure/node_modules/log/package.json ./node_modules/azure/node_modules/mime/LICENSE ./node_modules/azure/node_modules/mime/mime.js ./node_modules/azure/node_modules/mime/package.json ./node_modules/azure/node_modules/mime/types/mime.types ./node_modules/azure/node_modules/mime/types/node.types ./node_modules/azure/node_modules/node-uuid/LICENSE.md ./node_modules/azure/node_modules/node-uuid/package.json ./node_modules/azure/node_modules/node-uuid/uuid.js ./node_modules/azure/node_modules/qs/component.json ./node_modules/azure/node_modules/qs/index.js ./node_modules/azure/node_modules/qs/lib/head.js ./node_modules/azure/node_modules/qs/lib/querystring.js ./node_modules/azure/node_modules/qs/lib/tail.js ./node_modules/azure/node_modules/qs/package.json ./node_modules/azure/node_modules/qs/querystring.js ./node_modules/azure/node_modules/request/aws.js ./node_modules/azure/node_modules/request/forever.js ./node_modules/azure/node_modules/request/LICENSE ./node_modules/azure/node_modules/request/main.js ./node_modules/azure/node_modules/request/node_modules/form-data/lib/form_data.js ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/async/index.js ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/async/lib/async.js ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/async/LICENSE ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/async/package.json ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/combined-stream/lib/combined_stream.js ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/combined-stream/License ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream/lib/delayed_stream.js ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream/License 
./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream/package.json ./node_modules/azure/node_modules/request/node_modules/form-data/node_modules/combined-stream/package.json ./node_modules/azure/node_modules/request/node_modules/form-data/package.json ./node_modules/azure/node_modules/request/node_modules/mime/LICENSE ./node_modules/azure/node_modules/request/node_modules/mime/mime.js ./node_modules/azure/node_modules/request/node_modules/mime/package.json ./node_modules/azure/node_modules/request/node_modules/mime/types/mime.types ./node_modules/azure/node_modules/request/node_modules/mime/types/node.types ./node_modules/azure/node_modules/request/oauth.js ./node_modules/azure/node_modules/request/package.json ./node_modules/azure/node_modules/request/tunnel.js ./node_modules/azure/node_modules/request/uuid.js ./node_modules/azure/node_modules/request/vendor/cookie/index.js ./node_modules/azure/node_modules/request/vendor/cookie/jar.js ./node_modules/azure/node_modules/sax/lib/sax.js ./node_modules/azure/node_modules/sax/LICENSE ./node_modules/azure/node_modules/sax/package.json ./node_modules/azure/node_modules/streamline/AUTHORS ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/decompiler.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/definitions.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/jsbrowser.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/jsdecomp.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/jsdefs.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/jsexec.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/jslex.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/jsparse.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/lexer.js ./node_modules/azure/node_modules/streamline/deps/narcissus/lib/parser.js ./node_modules/azure/node_modules/streamline/deps/narcissus/LICENSE ./node_modules/azure/node_modules/streamline/deps/narcissus/main.js ./node_modules/azure/node_modules/streamline/deps/narcissus/package.json ./node_modules/azure/node_modules/streamline/deps/narcissus/xfail/narcissus-failures.txt ./node_modules/azure/node_modules/streamline/deps/narcissus/xfail/narcissus-slow.txt ./node_modules/azure/node_modules/streamline/lib/callbacks/format.js ./node_modules/azure/node_modules/streamline/lib/callbacks/require-stub.js ./node_modules/azure/node_modules/streamline/lib/callbacks/runtime.js ./node_modules/azure/node_modules/streamline/lib/callbacks/transform.js ./node_modules/azure/node_modules/streamline/lib/compile.js ./node_modules/azure/node_modules/streamline/lib/compiler/command.js ./node_modules/azure/node_modules/streamline/lib/compiler/compile--fibers.js ./node_modules/azure/node_modules/streamline/lib/compiler/compile.js ./node_modules/azure/node_modules/streamline/lib/compiler/compile_.js ./node_modules/azure/node_modules/streamline/lib/compiler/index.js ./node_modules/azure/node_modules/streamline/lib/compiler/register.js ./node_modules/azure/node_modules/streamline/lib/fibers/runtime.js ./node_modules/azure/node_modules/streamline/lib/fibers/transform.js ./node_modules/azure/node_modules/streamline/lib/fibers/walker.js ./node_modules/azure/node_modules/streamline/lib/globals.js ./node_modules/azure/node_modules/streamline/lib/index.js ./node_modules/azure/node_modules/streamline/lib/register.js 
./node_modules/azure/node_modules/streamline/lib/require/client/require.js ./node_modules/azure/node_modules/streamline/lib/require/server/depend.js ./node_modules/azure/node_modules/streamline/lib/require/server/require.js ./node_modules/azure/node_modules/streamline/lib/streams/client/streams--fibers.js ./node_modules/azure/node_modules/streamline/lib/streams/client/streams.js ./node_modules/azure/node_modules/streamline/lib/streams/client/streams_.js ./node_modules/azure/node_modules/streamline/lib/streams/jsonRequest.js ./node_modules/azure/node_modules/streamline/lib/streams/readers.js ./node_modules/azure/node_modules/streamline/lib/streams/server/httpHelper.js ./node_modules/azure/node_modules/streamline/lib/streams/server/streams.js ./node_modules/azure/node_modules/streamline/lib/streams/streams.js ./node_modules/azure/node_modules/streamline/lib/tools/docTool.js ./node_modules/azure/node_modules/streamline/lib/transform.js ./node_modules/azure/node_modules/streamline/lib/util/flows--fibers.js ./node_modules/azure/node_modules/streamline/lib/util/flows.js ./node_modules/azure/node_modules/streamline/lib/util/flows_.js ./node_modules/azure/node_modules/streamline/lib/util/future.js ./node_modules/azure/node_modules/streamline/lib/util/index.js ./node_modules/azure/node_modules/streamline/lib/util/url.js ./node_modules/azure/node_modules/streamline/lib/util/uuid.js ./node_modules/azure/node_modules/streamline/module.js ./node_modules/azure/node_modules/streamline/package.json ./node_modules/azure/node_modules/tunnel/index.js ./node_modules/azure/node_modules/tunnel/lib/tunnel.js ./node_modules/azure/node_modules/tunnel/package.json ./node_modules/azure/node_modules/underscore/index.js ./node_modules/azure/node_modules/underscore/LICENSE ./node_modules/azure/node_modules/underscore/package.json ./node_modules/azure/node_modules/underscore/underscore.js ./node_modules/azure/node_modules/underscore.string/lib/underscore.string.js ./node_modules/azure/node_modules/underscore.string/package.json ./node_modules/azure/node_modules/validator/index.js ./node_modules/azure/node_modules/validator/lib/defaultError.js ./node_modules/azure/node_modules/validator/lib/entities.js ./node_modules/azure/node_modules/validator/lib/filter.js ./node_modules/azure/node_modules/validator/lib/index.js ./node_modules/azure/node_modules/validator/lib/validator.js ./node_modules/azure/node_modules/validator/lib/validators.js ./node_modules/azure/node_modules/validator/lib/xss.js ./node_modules/azure/node_modules/validator/LICENSE ./node_modules/azure/node_modules/validator/package.json ./node_modules/azure/node_modules/validator/validator.js ./node_modules/azure/node_modules/winston/lib/winston/common.js ./node_modules/azure/node_modules/winston/lib/winston/config/cli-config.js ./node_modules/azure/node_modules/winston/lib/winston/config/npm-config.js ./node_modules/azure/node_modules/winston/lib/winston/config/syslog-config.js ./node_modules/azure/node_modules/winston/lib/winston/config.js ./node_modules/azure/node_modules/winston/lib/winston/container.js ./node_modules/azure/node_modules/winston/lib/winston/exception.js ./node_modules/azure/node_modules/winston/lib/winston/logger.js ./node_modules/azure/node_modules/winston/lib/winston/transports/console.js ./node_modules/azure/node_modules/winston/lib/winston/transports/file.js ./node_modules/azure/node_modules/winston/lib/winston/transports/http.js ./node_modules/azure/node_modules/winston/lib/winston/transports/transport.js 
./node_modules/azure/node_modules/winston/lib/winston/transports/webhook.js ./node_modules/azure/node_modules/winston/lib/winston/transports.js ./node_modules/azure/node_modules/winston/lib/winston.js ./node_modules/azure/node_modules/winston/LICENSE ./node_modules/azure/node_modules/winston/node_modules/cycle/cycle.js ./node_modules/azure/node_modules/winston/node_modules/cycle/package.json ./node_modules/azure/node_modules/winston/node_modules/pkginfo/lib/pkginfo.js ./node_modules/azure/node_modules/winston/node_modules/pkginfo/package.json ./node_modules/azure/node_modules/winston/node_modules/request/aws.js ./node_modules/azure/node_modules/winston/node_modules/request/aws2.js ./node_modules/azure/node_modules/winston/node_modules/request/forever.js ./node_modules/azure/node_modules/winston/node_modules/request/LICENSE ./node_modules/azure/node_modules/winston/node_modules/request/main.js ./node_modules/azure/node_modules/winston/node_modules/request/mimetypes.js ./node_modules/azure/node_modules/winston/node_modules/request/oauth.js ./node_modules/azure/node_modules/winston/node_modules/request/package.json ./node_modules/azure/node_modules/winston/node_modules/request/tunnel.js ./node_modules/azure/node_modules/winston/node_modules/request/uuid.js ./node_modules/azure/node_modules/winston/node_modules/request/vendor/cookie/index.js ./node_modules/azure/node_modules/winston/node_modules/request/vendor/cookie/jar.js ./node_modules/azure/node_modules/winston/node_modules/stack-trace/lib/stack-trace.js ./node_modules/azure/node_modules/winston/node_modules/stack-trace/License ./node_modules/azure/node_modules/winston/node_modules/stack-trace/package.json ./node_modules/azure/node_modules/winston/package.json ./node_modules/azure/node_modules/xml2js/lib/xml2js.js ./node_modules/azure/node_modules/xml2js/LICENSE ./node_modules/azure/node_modules/xml2js/package.json ./node_modules/azure/node_modules/xmlbuilder/lib/index.js ./node_modules/azure/node_modules/xmlbuilder/lib/XMLBuilder.js ./node_modules/azure/node_modules/xmlbuilder/lib/XMLFragment.js ./node_modules/azure/node_modules/xmlbuilder/package.json ./node_modules/azure/package.json ./node_modules/dpush/lib/dpush.js ./node_modules/dpush/LICENSE.txt ./node_modules/dpush/package.json ./node_modules/express/.npmignore ./node_modules/express/.travis.yml ./node_modules/express/bin/express ./node_modules/express/History.md ./node_modules/express/index.js ./node_modules/express/lib/application.js ./node_modules/express/lib/express.js ./node_modules/express/lib/middleware.js ./node_modules/express/lib/request.js ./node_modules/express/lib/response.js ./node_modules/express/lib/router/index.js ./node_modules/express/lib/router/route.js ./node_modules/express/lib/utils.js ./node_modules/express/lib/view.js ./node_modules/express/LICENSE ./node_modules/express/Makefile ./node_modules/express/node_modules/buffer-crc32/.npmignore ./node_modules/express/node_modules/buffer-crc32/.travis.yml ./node_modules/express/node_modules/buffer-crc32/index.js ./node_modules/express/node_modules/buffer-crc32/package.json ./node_modules/express/node_modules/buffer-crc32/README.md ./node_modules/express/node_modules/buffer-crc32/tests/crc.test.js ./node_modules/express/node_modules/commander/.npmignore ./node_modules/express/node_modules/commander/.travis.yml ./node_modules/express/node_modules/commander/History.md ./node_modules/express/node_modules/commander/index.js ./node_modules/express/node_modules/commander/lib/commander.js 
./node_modules/express/node_modules/commander/Makefile ./node_modules/express/node_modules/commander/package.json ./node_modules/express/node_modules/commander/Readme.md ./node_modules/express/node_modules/connect/.npmignore ./node_modules/express/node_modules/connect/.travis.yml ./node_modules/express/node_modules/connect/index.js ./node_modules/express/node_modules/connect/lib/cache.js ./node_modules/express/node_modules/connect/lib/connect.js ./node_modules/express/node_modules/connect/lib/index.js ./node_modules/express/node_modules/connect/lib/middleware/basicAuth.js ./node_modules/express/node_modules/connect/lib/middleware/bodyParser.js ./node_modules/express/node_modules/connect/lib/middleware/compress.js ./node_modules/express/node_modules/connect/lib/middleware/cookieParser.js ./node_modules/express/node_modules/connect/lib/middleware/cookieSession.js ./node_modules/express/node_modules/connect/lib/middleware/csrf.js ./node_modules/express/node_modules/connect/lib/middleware/directory.js ./node_modules/express/node_modules/connect/lib/middleware/errorHandler.js ./node_modules/express/node_modules/connect/lib/middleware/favicon.js ./node_modules/express/node_modules/connect/lib/middleware/json.js ./node_modules/express/node_modules/connect/lib/middleware/limit.js ./node_modules/express/node_modules/connect/lib/middleware/logger.js ./node_modules/express/node_modules/connect/lib/middleware/methodOverride.js ./node_modules/express/node_modules/connect/lib/middleware/multipart.js ./node_modules/express/node_modules/connect/lib/middleware/query.js ./node_modules/express/node_modules/connect/lib/middleware/responseTime.js ./node_modules/express/node_modules/connect/lib/middleware/session/cookie.js ./node_modules/express/node_modules/connect/lib/middleware/session/memory.js ./node_modules/express/node_modules/connect/lib/middleware/session/session.js ./node_modules/express/node_modules/connect/lib/middleware/session/store.js ./node_modules/express/node_modules/connect/lib/middleware/session.js ./node_modules/express/node_modules/connect/lib/middleware/static.js ./node_modules/express/node_modules/connect/lib/middleware/staticCache.js ./node_modules/express/node_modules/connect/lib/middleware/timeout.js ./node_modules/express/node_modules/connect/lib/middleware/urlencoded.js ./node_modules/express/node_modules/connect/lib/middleware/vhost.js ./node_modules/express/node_modules/connect/lib/patch.js ./node_modules/express/node_modules/connect/lib/proto.js ./node_modules/express/node_modules/connect/lib/public/directory.html ./node_modules/express/node_modules/connect/lib/public/error.html ./node_modules/express/node_modules/connect/lib/public/favicon.ico ./node_modules/express/node_modules/connect/lib/public/icons/page.png ./node_modules/express/node_modules/connect/lib/public/icons/page_add.png ./node_modules/express/node_modules/connect/lib/public/icons/page_attach.png ./node_modules/express/node_modules/connect/lib/public/icons/page_code.png ./node_modules/express/node_modules/connect/lib/public/icons/page_copy.png ./node_modules/express/node_modules/connect/lib/public/icons/page_delete.png ./node_modules/express/node_modules/connect/lib/public/icons/page_edit.png ./node_modules/express/node_modules/connect/lib/public/icons/page_error.png ./node_modules/express/node_modules/connect/lib/public/icons/page_excel.png ./node_modules/express/node_modules/connect/lib/public/icons/page_find.png ./node_modules/express/node_modules/connect/lib/public/icons/page_gear.png 
./node_modules/express/node_modules/connect/lib/public/icons/page_go.png ./node_modules/express/node_modules/connect/lib/public/icons/page_green.png ./node_modules/express/node_modules/connect/lib/public/icons/page_key.png ./node_modules/express/node_modules/connect/lib/public/icons/page_lightning.png ./node_modules/express/node_modules/connect/lib/public/icons/page_link.png ./node_modules/express/node_modules/connect/lib/public/icons/page_paintbrush.png ./node_modules/express/node_modules/connect/lib/public/icons/page_paste.png ./node_modules/express/node_modules/connect/lib/public/icons/page_red.png ./node_modules/express/node_modules/connect/lib/public/icons/page_refresh.png ./node_modules/express/node_modules/connect/lib/public/icons/page_save.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_acrobat.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_actionscript.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_add.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_c.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_camera.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_cd.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_code.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_code_red.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_coldfusion.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_compressed.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_copy.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_cplusplus.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_csharp.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_cup.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_database.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_delete.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_dvd.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_edit.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_error.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_excel.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_find.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_flash.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_freehand.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_gear.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_get.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_go.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_h.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_horizontal.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_key.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_lightning.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_link.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_magnify.png 
./node_modules/express/node_modules/connect/lib/public/icons/page_white_medal.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_office.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_paint.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_paintbrush.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_paste.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_php.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_picture.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_powerpoint.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_put.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_ruby.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_stack.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_star.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_swoosh.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_text.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_text_width.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_tux.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_vector.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_visualstudio.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_width.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_word.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_world.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_wrench.png ./node_modules/express/node_modules/connect/lib/public/icons/page_white_zip.png ./node_modules/express/node_modules/connect/lib/public/icons/page_word.png ./node_modules/express/node_modules/connect/lib/public/icons/page_world.png ./node_modules/express/node_modules/connect/lib/public/style.css ./node_modules/express/node_modules/connect/lib/utils.js ./node_modules/express/node_modules/connect/LICENSE ./node_modules/express/node_modules/connect/node_modules/bytes/.npmignore ./node_modules/express/node_modules/connect/node_modules/bytes/component.json ./node_modules/express/node_modules/connect/node_modules/bytes/History.md ./node_modules/express/node_modules/connect/node_modules/bytes/index.js ./node_modules/express/node_modules/connect/node_modules/bytes/Makefile ./node_modules/express/node_modules/connect/node_modules/bytes/package.json ./node_modules/express/node_modules/connect/node_modules/bytes/Readme.md ./node_modules/express/node_modules/connect/node_modules/formidable/.npmignore ./node_modules/express/node_modules/connect/node_modules/formidable/.travis.yml ./node_modules/express/node_modules/connect/node_modules/formidable/benchmark/bench-multipart-parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/example/json.js ./node_modules/express/node_modules/connect/node_modules/formidable/example/post.js ./node_modules/express/node_modules/connect/node_modules/formidable/example/upload.js ./node_modules/express/node_modules/connect/node_modules/formidable/index.js ./node_modules/express/node_modules/connect/node_modules/formidable/lib/file.js ./node_modules/express/node_modules/connect/node_modules/formidable/lib/incoming_form.js 
./node_modules/express/node_modules/connect/node_modules/formidable/lib/index.js ./node_modules/express/node_modules/connect/node_modules/formidable/lib/json_parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/lib/multipart_parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/lib/octet_parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/lib/querystring_parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/LICENSE ./node_modules/express/node_modules/connect/node_modules/formidable/package.json ./node_modules/express/node_modules/connect/node_modules/formidable/Readme.md ./node_modules/express/node_modules/connect/node_modules/formidable/test/common.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/file/beta-sticker-1.png ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/file/binaryfile.tar.gz ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/file/blank.gif ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/file/funkyfilename.txt ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/file/menu_separator.png ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/file/plain.txt ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/http/special-chars-in-filename/info.md ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/js/encoding.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/js/misc.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/js/no-filename.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/js/preamble.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/js/special-chars-in-filename.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/js/workarounds.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/fixture/multipart.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/integration/test-fixtures.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/integration/test-json.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/integration/test-octet-stream.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/legacy/common.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/legacy/integration/test-multipart-parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/legacy/simple/test-file.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/legacy/simple/test-incoming-form.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/legacy/simple/test-multipart-parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/legacy/simple/test-querystring-parser.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/legacy/system/test-multi-video-upload.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/run.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/standalone/test-connection-aborted.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/standalone/test-content-transfer-encoding.js 
./node_modules/express/node_modules/connect/node_modules/formidable/test/standalone/test-issue-46.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/tools/base64.html ./node_modules/express/node_modules/connect/node_modules/formidable/test/unit/test-file.js ./node_modules/express/node_modules/connect/node_modules/formidable/test/unit/test-incoming-form.js ./node_modules/express/node_modules/connect/node_modules/formidable/tool/record.js ./node_modules/express/node_modules/connect/node_modules/pause/.npmignore ./node_modules/express/node_modules/connect/node_modules/pause/History.md ./node_modules/express/node_modules/connect/node_modules/pause/index.js ./node_modules/express/node_modules/connect/node_modules/pause/Makefile ./node_modules/express/node_modules/connect/node_modules/pause/package.json ./node_modules/express/node_modules/connect/node_modules/pause/Readme.md ./node_modules/express/node_modules/connect/node_modules/qs/.gitmodules ./node_modules/express/node_modules/connect/node_modules/qs/.npmignore ./node_modules/express/node_modules/connect/node_modules/qs/index.js ./node_modules/express/node_modules/connect/node_modules/qs/package.json ./node_modules/express/node_modules/connect/node_modules/qs/Readme.md ./node_modules/express/node_modules/connect/package.json ./node_modules/express/node_modules/connect/test.js ./node_modules/express/node_modules/cookie/.npmignore ./node_modules/express/node_modules/cookie/.travis.yml ./node_modules/express/node_modules/cookie/index.js ./node_modules/express/node_modules/cookie/package.json ./node_modules/express/node_modules/cookie/README.md ./node_modules/express/node_modules/cookie/test/mocha.opts ./node_modules/express/node_modules/cookie/test/parse.js ./node_modules/express/node_modules/cookie/test/serialize.js ./node_modules/express/node_modules/cookie-signature/.npmignore ./node_modules/express/node_modules/cookie-signature/History.md ./node_modules/express/node_modules/cookie-signature/index.js ./node_modules/express/node_modules/cookie-signature/Makefile ./node_modules/express/node_modules/cookie-signature/package.json ./node_modules/express/node_modules/cookie-signature/Readme.md ./node_modules/express/node_modules/debug/.npmignore ./node_modules/express/node_modules/debug/component.json ./node_modules/express/node_modules/debug/debug.js ./node_modules/express/node_modules/debug/example/app.js ./node_modules/express/node_modules/debug/example/browser.html ./node_modules/express/node_modules/debug/example/wildcards.js ./node_modules/express/node_modules/debug/example/worker.js ./node_modules/express/node_modules/debug/History.md ./node_modules/express/node_modules/debug/index.js ./node_modules/express/node_modules/debug/lib/debug.js ./node_modules/express/node_modules/debug/package.json ./node_modules/express/node_modules/debug/Readme.md ./node_modules/express/node_modules/fresh/.npmignore ./node_modules/express/node_modules/fresh/index.js ./node_modules/express/node_modules/fresh/Makefile ./node_modules/express/node_modules/fresh/package.json ./node_modules/express/node_modules/fresh/Readme.md ./node_modules/express/node_modules/methods/index.js ./node_modules/express/node_modules/methods/package.json ./node_modules/express/node_modules/mkdirp/.npmignore ./node_modules/express/node_modules/mkdirp/.travis.yml ./node_modules/express/node_modules/mkdirp/examples/pow.js ./node_modules/express/node_modules/mkdirp/index.js ./node_modules/express/node_modules/mkdirp/LICENSE 
./node_modules/express/node_modules/mkdirp/package.json ./node_modules/express/node_modules/mkdirp/README.markdown ./node_modules/express/node_modules/mkdirp/test/chmod.js ./node_modules/express/node_modules/mkdirp/test/clobber.js ./node_modules/express/node_modules/mkdirp/test/mkdirp.js ./node_modules/express/node_modules/mkdirp/test/perm.js ./node_modules/express/node_modules/mkdirp/test/perm_sync.js ./node_modules/express/node_modules/mkdirp/test/race.js ./node_modules/express/node_modules/mkdirp/test/rel.js ./node_modules/express/node_modules/mkdirp/test/return.js ./node_modules/express/node_modules/mkdirp/test/return_sync.js ./node_modules/express/node_modules/mkdirp/test/root.js ./node_modules/express/node_modules/mkdirp/test/sync.js ./node_modules/express/node_modules/mkdirp/test/umask.js ./node_modules/express/node_modules/mkdirp/test/umask_sync.js ./node_modules/express/node_modules/range-parser/.npmignore ./node_modules/express/node_modules/range-parser/History.md ./node_modules/express/node_modules/range-parser/index.js ./node_modules/express/node_modules/range-parser/Makefile ./node_modules/express/node_modules/range-parser/package.json ./node_modules/express/node_modules/range-parser/Readme.md ./node_modules/express/node_modules/send/.npmignore ./node_modules/express/node_modules/send/History.md ./node_modules/express/node_modules/send/index.js ./node_modules/express/node_modules/send/lib/send.js ./node_modules/express/node_modules/send/lib/utils.js ./node_modules/express/node_modules/send/Makefile ./node_modules/express/node_modules/send/node_modules/mime/LICENSE ./node_modules/express/node_modules/send/node_modules/mime/mime.js ./node_modules/express/node_modules/send/node_modules/mime/package.json ./node_modules/express/node_modules/send/node_modules/mime/README.md ./node_modules/express/node_modules/send/node_modules/mime/test.js ./node_modules/express/node_modules/send/node_modules/mime/types/mime.types ./node_modules/express/node_modules/send/node_modules/mime/types/node.types ./node_modules/express/node_modules/send/package.json ./node_modules/express/node_modules/send/Readme.md ./node_modules/express/package.json ./node_modules/express/Readme.md ./node_modules/mpns/lib/mpns.js ./node_modules/mpns/package.json ./node_modules/oauth/index.js ./node_modules/oauth/lib/oauth.js ./node_modules/oauth/lib/oauth2.js ./node_modules/oauth/lib/sha1.js ./node_modules/oauth/lib/_utils.js ./node_modules/oauth/LICENSE ./node_modules/oauth/package.json ./node_modules/pusher/index.js ./node_modules/pusher/lib/pusher.js ./node_modules/pusher/node_modules/request/aws.js ./node_modules/pusher/node_modules/request/aws2.js ./node_modules/pusher/node_modules/request/forever.js ./node_modules/pusher/node_modules/request/LICENSE ./node_modules/pusher/node_modules/request/main.js ./node_modules/pusher/node_modules/request/mimetypes.js ./node_modules/pusher/node_modules/request/oauth.js ./node_modules/pusher/node_modules/request/package.json ./node_modules/pusher/node_modules/request/tunnel.js ./node_modules/pusher/node_modules/request/uuid.js ./node_modules/pusher/node_modules/request/vendor/cookie/index.js ./node_modules/pusher/node_modules/request/vendor/cookie/jar.js ./node_modules/pusher/package.json ./node_modules/request/forever.js ./node_modules/request/LICENSE ./node_modules/request/main.js ./node_modules/request/mimetypes.js ./node_modules/request/oauth.js ./node_modules/request/package.json ./node_modules/request/uuid.js ./node_modules/request/vendor/cookie/index.js 
./node_modules/request/vendor/cookie/jar.js ./node_modules/sax/lib/sax.js ./node_modules/sax/LICENSE ./node_modules/sax/package.json ./node_modules/sendgrid/index.js ./node_modules/sendgrid/lib/email.js ./node_modules/sendgrid/lib/file_handler.js ./node_modules/sendgrid/lib/sendgrid.js ./node_modules/sendgrid/lib/smtpapi_headers.js ./node_modules/sendgrid/lib/validation.js ./node_modules/sendgrid/MIT.LICENSE ./node_modules/sendgrid/node_modules/mime/LICENSE ./node_modules/sendgrid/node_modules/mime/mime.js ./node_modules/sendgrid/node_modules/mime/package.json ./node_modules/sendgrid/node_modules/mime/types/mime.types ./node_modules/sendgrid/node_modules/mime/types/node.types ./node_modules/sendgrid/node_modules/nodemailer/lib/engines/sendmail.js ./node_modules/sendgrid/node_modules/nodemailer/lib/engines/ses.js ./node_modules/sendgrid/node_modules/nodemailer/lib/engines/smtp.js ./node_modules/sendgrid/node_modules/nodemailer/lib/engines/stub.js ./node_modules/sendgrid/node_modules/nodemailer/lib/helpers.js ./node_modules/sendgrid/node_modules/nodemailer/lib/nodemailer.js ./node_modules/sendgrid/node_modules/nodemailer/lib/transport.js ./node_modules/sendgrid/node_modules/nodemailer/lib/wellknown.js ./node_modules/sendgrid/node_modules/nodemailer/lib/xoauth.js ./node_modules/sendgrid/node_modules/nodemailer/LICENSE ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/lib/dkim.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/lib/mailcomposer.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/lib/punycode.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/lib/urlfetch.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/LICENSE ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/node_modules/mimelib-noiconv/content-types.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/node_modules/mimelib-noiconv/index.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/node_modules/mimelib-noiconv/LICENSE ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/node_modules/mimelib-noiconv/mime-functions.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/node_modules/mimelib-noiconv/package.json ./node_modules/sendgrid/node_modules/nodemailer/node_modules/mailcomposer/package.json ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/index.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/lib/client.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/lib/pool.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/lib/server.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/lib/starttls.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/LICENSE ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/node_modules/rai/cert/cert.pem ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/node_modules/rai/cert/key.pem ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/node_modules/rai/lib/mockup.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/node_modules/rai/lib/rai.js ./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/node_modules/rai/lib/starttls.js 
./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/node_modules/rai/LICENSE
./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/node_modules/rai/package.json
./node_modules/sendgrid/node_modules/nodemailer/node_modules/simplesmtp/package.json
./node_modules/sendgrid/node_modules/nodemailer/package.json
./node_modules/sendgrid/node_modules/step/lib/step.js
./node_modules/sendgrid/node_modules/step/package.json
./node_modules/sendgrid/node_modules/underscore/index.js
./node_modules/sendgrid/node_modules/underscore/LICENSE
./node_modules/sendgrid/node_modules/underscore/package.json
./node_modules/sendgrid/node_modules/underscore/underscore.js
./node_modules/sendgrid/package.json
./node_modules/sqlserver/lib/sql.js
./node_modules/sqlserver/lib/sqlserver.native.js
./node_modules/sqlserver/lib/sqlserver.node
./node_modules/sqlserver/package.json
./node_modules/tripwire/lib/native/windows/x86/tripwire.node
./node_modules/tripwire/lib/tripwire.js
./node_modules/tripwire/LICENSE.txt
./node_modules/tripwire/package.json
./node_modules/underscore/LICENSE
./node_modules/underscore/package.json
./node_modules/underscore/underscore.js
./node_modules/underscore.string/lib/underscore.string.js
./node_modules/underscore.string/package.json
./node_modules/wns/lib/wns.js
./node_modules/wns/LICENSE.txt
./node_modules/wns/package.json
./node_modules/xml2js/lib/xml2js.js
./node_modules/xml2js/LICENSE
./node_modules/xml2js/node_modules/sax/lib/sax.js
./node_modules/xml2js/node_modules/sax/LICENSE
./node_modules/xml2js/node_modules/sax/package.json
./node_modules/xml2js/package.json
./node_modules/xmlbuilder/lib/index.js
./node_modules/xmlbuilder/lib/XMLBuilder.js
./node_modules/xmlbuilder/lib/XMLFragment.js
./node_modules/xmlbuilder/package.json
./runtime/core.js
./runtime/filehelpers.js
./runtime/jsonwebtoken.js
./runtime/logger.js
./runtime/logwriter.js
./runtime/metrics.js
./runtime/query/expressions.js
./runtime/query/expressionvisitor.js
./runtime/query/queryparser.js
./runtime/request/authentication/facebook.js
./runtime/request/authentication/google.js
./runtime/request/authentication/microsoftaccount.js
./runtime/request/authentication/twitter.js
./runtime/request/dataoperation.js
./runtime/request/datapipeline.js
./runtime/request/html/corshelper.js
./runtime/request/html/crossdomainhandler.js
./runtime/request/html/templates/crossdomainbridge.html
./runtime/request/html/templates/loginviaiframe.html
./runtime/request/html/templates/loginviaiframereceiver.html
./runtime/request/html/templates/loginviapostmessage.html
./runtime/request/html/templating.js
./runtime/request/loginhandler.js
./runtime/request/middleware/allowHandler.js
./runtime/request/middleware/authenticate.js
./runtime/request/middleware/authorize.js
./runtime/request/middleware/bodyParser.js
./runtime/request/middleware/errorHandler.js
./runtime/request/middleware/requestLimit.js
./runtime/request/request.js
./runtime/request/requesthandler.js
./runtime/request/schedulerhandler.js
./runtime/request/statushandler.js
./runtime/request/tablehandler.js
./runtime/resources.js
./runtime/script/apibuilder.js
./runtime/script/metadata.js
./runtime/script/push/notify-apns.js
./runtime/script/push/notify-gcm.js
./runtime/script/push/notify-mpns.js
./runtime/script/push/notify-wns.js
./runtime/script/push/notify.js
./runtime/script/scriptcache.js
./runtime/script/scripterror.js
./runtime/script/scriptloader.js
./runtime/script/scriptmanager.js
./runtime/script/scriptstate.js
./runtime/script/sqladapter.js
./runtime/script/table.js
./runtime/server.js
./runtime/statuscodes.js
./runtime/storage/sqlbooleanizer.js
./runtime/storage/sqlformatter.js
./runtime/storage/sqlhelpers.js
./runtime/storage/storage.js
./runtime/Zumo.Node.js
./static/client/MobileServices.Web-1.0.0.js
./static/client/MobileServices.Web-1.0.0.min.js
./static/default.htm
./static/robots.txt
./Web.config
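Of all the files above, the table scripts under ./App_Data/config/scripts/table are the ones you write yourself: each is a small piece of server-side JavaScript that intercepts one operation before the data pipeline (./runtime/request/datapipeline.js) executes it. As a minimal sketch, assuming a hypothetical table named yourtable and an invented validation rule (the insert(item, user, request) signature, request.execute(), request.respond() and the global statusCodes object are the platform's documented script API; everything else here is illustrative):

// yourtable.insert.js - runs server-side before the row is written.
// 'item' is the incoming JSON object, 'user' the authenticated caller,
// 'request' controls whether the operation proceeds.
function insert(item, user, request) {
    if (!item.text || item.text.length > 10) {
        // Reject the insert with HTTP 400 instead of executing it.
        request.respond(statusCodes.BAD_REQUEST,
            'text is required and must be 10 characters or fewer');
        return;
    }
    item.createdBy = user.userId; // stamp the row with the caller's id
    request.execute();            // hand the item on to the data pipeline
    console.log('inserted item for user %s', user.userId); // written to eventlog.xml
}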


  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
TechEd India was one of the largest technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I have attempted to attend almost all the technology events here, I have not seen a bigger or better event in the Indian subcontinent than this one. There were 21 technical tracks at Tech·Ed India 2010, spanning more than 745 learning opportunities. I was fortunate enough to be a part of this whole event, as both a speaker and a delegate.

TechEd India Speaker Badge and A Token of Lifetime

Hotel Selection

I presented three different sessions at TechEd India and was also part of a panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I am occasionally away from my family. For this reason, I took my wife Nupur and my daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized, so as to maximize my time bonding with my family while still having time to network with the technology community. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The hotel was a bit pricey but, looking at all the advantages, I decided to book there.

Hotel Lalit Ashok

Nupur Dave and Shaivi Dave

Arrival Day – DAY 0 – April 11, 2010

I reached the event a day earlier, and that was a wise decision, for I was able to relax a bit and go over my presentation for the next day's course. I am the kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing my presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft, who helped me out with a few logistics issues that occurred the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was very kind as he talked to me regarding my subsequent session. He gave me some really helpful suggestions that I was able to incorporate into my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, extremely busy with logistics; however, in those busy times, he did find some spare time to chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta, Vinod, Bijoy, Harish, Sachin and Abhishek (of course!).

Map of TechEd India 2010 Event

Day 1 – April 12, 2010

From morning until night, it was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somasegar, where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than-life uniform screen. It was truly a sight to behold.
The title music of the keynote was very interesting, and it featured Bijoy Singhal as the model. It was fun to talk to him afterwards, when we laughed together at jokes about his modeling assignment.

TechEd India Keynote Opening Featuring Bijoy

TechEd India 2010 Keynote – S. Somasegar

Time: 11:15am – 11:45am
Session 1: True Lies of SQL Server – SQL Myth Buster

Following the excellent keynote, I had my very first session, on the subject of SQL Server myth busting. At first I was a bit nervous: this was my very first session, it came right after the keynote, and during my presentation I saw lots of Microsoft Product Team members in the audience. Well, it went really well, and I had a really good discussion with the attendees of the session. I felt that well begun is half done, and my confidence was restored. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects.

The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backing them up with demos. This demo presentation is a must-attend for all developers and administrators who come to the event. This is going to be a very quick yet fun session.

Pinal Presenting session at TechEd India 2010

Time: 1:00 PM – 2:00 PM
Lunch with Somasegar

After the session I went to see my daughter, and then I headed right away to lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank Abhishek, who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and talk with him about Microsoft technology. Though Somasegar holds such a high position at Microsoft, he is very polite and a real gentleman; how I wish everybody in the industry were like him. Believe me, if you spread love and kindness, that is what you will receive back. As soon as lunch was over, I ran to the session hall, as my second presentation was about to start.

Time: 2:30pm – 3:30pm
Session 2: Master Data Services in Microsoft SQL Server 2008 R2

Business Intelligence was a subject widely talked about at TechEd. Everybody was interested in it, and I did not excuse myself from this great concept either. I consider myself fortunate to have been presenting on the subject of Master Data Services at TechEd. When I initially learned this subject, I was a bit confused about the usage of this tool. Later on, I decided I would tackle how we developers and DBAs fail to understand something as simple as this and, even worse, create confusion about the technology. During system design, it is very important to have reference material or master lookup tables. Well, I talked about that very subject and presented the session keeping it as the center of my talk. The session went very well, and I received lots of interesting questions. I got many compliments for grounding this subject in real-life scenarios. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject itself.

Pinal Presenting session at TechEd India 2010

The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal.
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables a consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, the MDS master data hub, a vital component, helps ensure consistency of reporting across systems and delivers faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

Pinal Presenting session at TechEd India 2010

The day was still not over for me. I ran into several friends, but we were not able to keep our enthusiasm under control about all the rumors that SQL Server 2008 R2 was about to be launched in the next day's keynote. I then ran to my third and final technical event for the day – a panel discussion with the top technologists of India.

Time: 5:00pm – 6:00pm
Panel Discussion: Harness the power of Web – SEO and Technical Blogging

Having delivered two technical sessions by this time, I was a bit tired, but no less enthusiastic when I had to talk about blogs and technology. We discussed many different topics there. I told them that the most important aspect of any blog is its content. We discussed in depth the issue of plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and which acts are morally and technically wrong. A couple of questions were raised about what kind of liberty a person can have in writing blogs. Well, it was generally agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one writes what they really want to say, without providing incorrect information or committing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was so remarkable that it was extended for 30 minutes more. Finally, we decided to bring the discussion to a close and agreed that we would continue the topic next year.

TechEd India Panel Discussion on Web, Technology and SEO

Surprisingly, the day was just beginning even after all of this. By this time, I had met almost all the MVPs who had arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided to go meet several friends from the Community who communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother tongue, Gujarati. It was funny that I had talked in Gujarati almost all day, but when I was speaking in the interview I could not find the right Gujarati words. I think we all think in English when we think about technology. After meeting them, I headed towards the Speakers' Dinner.

Time: 8:00 PM – onwards
Speakers Dinner

The Speakers' Dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked about so many different things, from XBOX to Hindi movies, and from SQL to samosas. I just cannot express how much fun I had. After a long evening, when I returned to my room and met Shaivi, I instantly felt relaxed.
Kids are truly gifts from God. Today was a really long but exciting day. So many things happened in just one day: the Visual Studio launch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, a community leaders meeting, the speakers dinner and, last but not least, playing with my child! A perfect day!

Day 2 – April 13, 2010

Today started with a bang with the excellent keynote by Kamal Hathi, who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 million rows in Excel brought lots of applause from the audience.

Kamal Hathi Presenting Keynote at TechEd India 2010

The day was a bit easier for me. I had no sessions and no events planned, just a few meetings for the second day of the event. I sat in the speakers' lounge for half the day and met many people there. I attended nearly 9 different meetings today, on very different subjects. Here is a list of the topics of the Community-related meetings:

SQL PASS and its involvement in India and the subcontinent
How to start community blogging
Forums and developing an aptitude for technology
Ahmedabad/Gandhinagar User Groups and their development
SharePoint and SQL
Business Meeting – a client meeting
Business Meeting – a potential performance tuning project
Business Meeting – Solid Quality Mentors (SolidQ)
And family friends

Pinal Dave at TechEd India

The day passed by so quickly during these meetings. In the evening, I headed to the Partners Expo with friends and checked out a few of the booths. I really wanted to talk about some of the products, but because of the freebies there was such a crowd that I finally decided to just take the partners' contact details. I will now start sending them my queries and, hopefully, I will have my questions answered. Nupur and Shaivi also had one meeting to attend; it was with our family friend Vijay Raj. Vijay is a person who loves technology, and loves it more than anybody. I see him growing and learning every day, yet still remaining 'human'. I believe that if anyone else acquired as much knowledge as he has, that person would become either a computer or a cyborg. Vijay, though, is still a kind gentleman and remains our close family friend. Shaivi was really happy to play with Uncle Vijay.

Pinal Dave and Vijay Raj

Renuka Prasad, a Microsoft MVP, impressed me with his passion for and knowledge of SQL. Every time he gives me credit for his success, I realize how humble he is. He has far more certifications than I do and has worked many more years with SQL than I have. He is an excellent photographer as well; most of the photos in this blog post were taken by him. I told him that if he ever wants a part-time job, he could do photography very well.

Pinal Dave and Renuka Prasad

I also met L Srividya from Microsoft, whom I had been looking forward to meeting. She is a bundle of knowledge, and everyone would surely learn a lot from her. I was able to get a few minutes with her and, well, I felt confident. She enlightened me on SQL Server BI concepts, domain management, SQL Server security and a few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint, but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I learned that the recent training he delivered was on SQL Server 2008 R2.
I joked with him that it hurts my ego that he is now more popular in SQL training and consulting than I am. I am sure all of you agree that working with good people is a gift from God, and I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)

Time: 8:00 PM – onwards
Community Leaders Dinner

After lots of meetings, I headed to the Community Leaders dinner and met almost all the folks I had met in the morning. The discussion was much the same, but the really good thing was that we were enjoying it. The food was really good. Nupur was invited to the event, but Shaivi could not come. When Nupur tried to enter, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food, and could not stay by herself at this age, but the doorkeeper did not relent and insisted that without the entry details Shaivi could not go in, though Nupur could. Nupur called me on the phone and asked me to help her out. By the time I got outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once at the party, Shaivi had lots of fun meeting so many people.

Shaivi Dave and Abhishek Kant

Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)

Day 3 – April 14, 2010

Though it was the last day, I was very excited, as I was about to present my very favorite session. Query optimization and performance tuning is my domain expertise, and I make my living by consulting and training on it. Today's session was on the same subject and, as an additional twist, the subject of spatial databases was added. I had always been intrigued by spatial databases and have enjoyed learning about them; however, I had never thought about spatial indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and review the content. Furthermore, today was really what I call my 'learning day'. So far I had not attended any session at TechEd, and I felt a bit down about that. Everybody spends their valuable time and money to learn something new and exciting at TechEd, and I had not attended a single session, even though it was already the last day of the event. I did have a plan for the day, and I attended two technical sessions before my own session on spatial databases. I attended 2 sessions by Vinod Kumar. Vinod is a natural storyteller, and there was no doubt that his sessions would be jam-packed. People attended his sessions simply because Vinod was the speaker. He has never once disappointed an audience; he is truly a good speaker and knows his stuff very well. I personally do not think that, in India, he can be compared to anyone for SQL.

Time: 12:30pm-1:30pm
SQL Server Query Optimization, Execution and Debugging Query Performance

I really had a fun time attending this session. Vinod made it very interactive. The entire audience really got into the presentation and started participating in the event.
Vinod presented a small query tuning problem, one that any developer would have encountered, and solved it with the audience's help in such a fashion that every developer felt he or she had resolved it themselves. On one question, I was the only one ready to answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience found it very amusing. There was a huge crowd around Vinod after the session.

Vinod – A master storyteller!

Time: 3:45pm-4:45pm
Data Recovery / consistency with CheckDB

This session was much heavier than the earlier one, and I must say it is the most favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but in my career I have attended numerous technical sessions, not only in India but all over the world, and this session took my breath away. One by one, Vinod took different databases and started to corrupt them in different ways; each database had some unique way of getting corrupted. Once that was done, Vinod started to show DBCC CHECKDB and demonstrated how it can solve the problem. He finally fixed all the databases with this single tool. I do have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered during this session along with the other attendees. I had the total satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated.

Pinal Dave and Vinod Kumar

After two very interactive and informative SQL sessions from Vinod Kumar, it was my turn to present on spatial databases and indexing. I once again got nervous, but Vinod told me to stay natural and do my presentation. Well, once I got onto a huge stage with a total of four projectors and a large crowd, I felt better.

Time: 5:00pm-6:00pm
Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

Pinal Presenting session at TechEd India 2010

Pinal Presenting session at TechEd India 2010

I kicked off this session with Michael J Swart's beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. Rain was slowly starting outside, and I was worried that the hall would not be full; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated the map of the world (and of India) and quickly explained what the Geography and Geometry data types in a spatial database are. The session included an interesting story about indexing and a comparison of how different traditional indexes are from spatial indexes.

Pinal Presenting session at TechEd India 2010

Due to the heavy rain during the event, the power went off for about 22 minutes (just an accident – nobody's fault). During those minutes there was no audio, no video and no light. I continued to address the crowd of 200+ people without any audio device or PowerPoint. I must thank the audience, because not a single person left the session. They all stayed in their places, and some moved closer to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at its peak even though this was the very last session of TechEd. Everybody wanted to get the maximum knowledge out of the whole event. I was touched by the support from the audience. They listened to and participated in my session even without any kind of technology (no ppt, no mike, no AC, nothing).
During those 22 minutes, I completed my theory verbally.

Pinal Presenting session at TechEd India 2010

After a while, we got the projector back online and continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get backup power for the projector up. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the map of India and found the aerial distance between them. After finding the aerial distance, we browsed online and found that SQL Server had estimated the aerial distance between these two cities almost exactly, as compared with the factual distance. There was huge applause from the crowd at the fact that SQL Server takes into account the curvature of the earth and finds precise distances based on it. While finding the distance, I demonstrated a few examples of the indexes involved, showing how one can use those indexes to find these distances and how they can improve the performance of similar queries. I also demonstrated a few examples showing for which data type an index is most useful. We finished the demos with a bit more of the internals.

Pinal Presenting session at TechEd India 2010

Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that for a moment the rain was not audible. I was truly moved by the dedication of the technology enthusiasts.

Pinal Dave After Presenting session at TechEd India 2010

The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, the new geometry data type to store planar spatial data and perform operations on it, how to take advantage of the new spatial indexes for high-performance queries, how to use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.

Time: 8:00 PM – onwards
Dinner by Sponsors

After the lively sessions of the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing from all angles. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event.

Pinal Dave and Bijoy Singhal

It was really one wonderful event. After writing this much, I still have no words to express how much I enjoyed TechEd. And it is true that I have shared with you only 1% of the total activities I took part in at the event. There were so many people I met who are not mentioned here, although I wanted to write their names too.
Anyway, I learned so many things, and even now I am not able to get over all the fun I had at this event.

Pinal Dave at TechEd India 2010

The Next Days – April 15, 2010 – till today

I am still not able to get my mind off the whole experience I had at TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience!

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn


  • Problem with Google Web Toolkit Maven Plugin

    - by arreche
I got an error following the PetClinic GWT application in less than 30 minutes. Any idea?

C:\Users\user\Desktop\petclinic>mvn -e gwt:run
+ Error stacktraces are turned on.
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building petclinic
[INFO]    task-segment: [gwt:run]
[INFO] ------------------------------------------------------------------------
[INFO] Preparing gwt:run
[INFO] [aspectj:compile {execution: default}]
[INFO] [resources:resources {execution: default-resources}]
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 4 resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] Nothing to compile - all classes are up to date
Downloading: http://repository.springsource.com/maven/bundles/release/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release)
Downloading: http://repository.springsource.com/maven/bundles/external/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external)
Downloading: http://repository.springsource.com/maven/bundles/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone)
Downloading: http://maven.springframework.org/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository spring-maven-milestone (http://maven.springframework.org/milestone)
Downloading: http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven)
Downloading: http://repo1.maven.org/maven2/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error building POM (may not be this project's POM).

Project ID: org.codehaus.plexus:plexus-components:pom:1.1.6
Reason: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6

[INFO] ------------------------------------------------------------------------
[INFO] Trace
org.apache.maven.lifecycle.LifecycleExecutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
  org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
from the specified remote repositories:
  com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
  com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
  spring-maven-snapshot (http://maven.springframework.org/snapshot),
  com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
  spring-maven-milestone (http://maven.springframework.org/milestone),
  central (http://repo1.maven.org/maven2),
  gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven),
  codehaus.org (http://snapshots.repository.codehaus.org),
  JBoss Repo (http://repository.jboss.com/maven2)
Path to dependency:
  1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:711)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
        at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
        at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
        at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
        at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
Caused by: org.apache.maven.artifact.resolver.ArtifactResolutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
  org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
from the specified remote repositories:
  com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
  com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
  spring-maven-snapshot (http://maven.springframework.org/snapshot),
  com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
  spring-maven-milestone (http://maven.springframework.org/milestone),
  central (http://repo1.maven.org/maven2),
  gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven),
  codehaus.org (http://snapshots.repository.codehaus.org),
  JBoss Repo (http://repository.jboss.com/maven2)
Path to dependency:
  1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
        at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:430)
        at org.apache.maven.artifact.resolver.DefaultArtifactCollector.collect(DefaultArtifactCollector.java:74)
        at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:316)
        at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:304)
        at org.apache.maven.plugin.DefaultPluginManager.ensurePluginContainerIsComplete(DefaultPluginManager.java:835)
        at org.apache.maven.plugin.DefaultPluginManager.getConfiguredMojo(DefaultPluginManager.java:647)
        at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:468)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
        ... 17 more
Caused by: org.apache.maven.artifact.metadata.ArtifactMetadataRetrievalException: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
        at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:200)
        at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedArtifact(MavenMetadataSource.java:94)
        at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:387)
        ... 24 more
Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
        at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396)
        at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
        at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
        at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823)
        at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromRepository(DefaultMavenProjectBuilder.java:255)
        at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:163)
        ... 26 more
Caused by: org.apache.maven.project.InvalidProjectModelException: Parse error reading POM. Reason: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16) for project org.codehaus.plexus:plexus at C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8\plexus-1.0.8.pom
        at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1610)
        at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1571)
        at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:562)
        at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392)
        ... 31 more
Caused by: org.codehaus.plexus.util.xml.pull.XmlPullParserException: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16)
        at hidden.org.codehaus.plexus.util.xml.pull.MXParser.nextTag(MXParser.java:1095)
        at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseDeveloper(MavenXpp3Reader.java:1389)
        at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseModel(MavenXpp3Reader.java:1944)
        at org.apache.maven.model.io.xpp3.MavenXpp3Reader.read(MavenXpp3Reader.java:3912)
        at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1606)
        ... 34 more
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11 seconds
[INFO] Finished at: Fri May 21 20:28:23 BST 2010
[INFO] Final Memory: 45M/205M
[INFO] ------------------------------------------------------------------------
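The root cause is visible at the bottom of the trace: the copy of plexus-1.0.8.pom cached in the local repository (C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8\plexus-1.0.8.pom) is corrupted, with the XML parser tripping over stray text inside the <developers> section, which typically indicates a truncated or interrupted download. A minimal sketch of one common remedy, assuming the corruption is confined to the local cache: delete the cached artifact directory and let Maven download it again on the next run.

C:\>rmdir /s /q C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8
C:\>cd C:\Users\user\Desktop\petclinic
C:\Users\user\Desktop\petclinic>mvn -e gwt:run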


  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event in Indian subcontinents other than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as a speaker and a delegate, as well. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time in networking with technology community, at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The cost of the hotel was a bit pricey, but looking at all the advantages, I had decided to ask for a booking there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was one wise decision for I was able to relax a bit and go over my presentation for the next day’s course. I am a kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft who helped me out with a few of the logistics issues that occured the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were really helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta,Vinod, Bijoy, Harish, Sachin and Ahishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somaseger where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than- life uniform screen. This was truly one to show. 
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed at jokes together about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15pm – 11:45pm Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous as right after the keynote, for this was my very first session and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with attendees of the session. I felt that a well begin was half-done and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myths and their resolutions as I back them up with some demo. This demo presentation is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank to Abhishek who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and had to talk with them on Microsoft Technology. Though Somasegar is currently holding such a high position in Microsoft, he is very polite and a real gentleman, and how I wish that everybody in industry is like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I did not excuse myself from this great concept as well. I consider myself fortunate as I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would tackle about how we all developers and DBAs are not able to understand something so simple such as this, and even worst, creating confusion about the technology. During system designing, it is very important to have a reference material or master lookup tables. Well, I talked about the same subject and presented the session keeping that as my center talk. The session went very well and I received lots of interesting questions. I got many compliments for talking about this subject on the real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. 
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – Master Data-hub which is a vital component, helps ensure the consistency of reporting across systems and deliver faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I had ran into several friends but we were not able keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched tomorrow in the keynote. I then ran to my third and final technical event for the day- a panel discussion with the top technologies of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I have delivered two technical sessions by this time, I was a bit tired but  not less enthusiastic when I had to talk about Blog and Technology. We discussed many different topics there. I told them that the most important aspect for any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what morally and technically wrong acts are. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generically agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, but not providing incorrect information or not practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring to a close the discussion and agreed that we will continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after doing all of these. By this time, I have almost met all the MVP who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community and continue to communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother’s tongue, Gujarati. It was funny that I talked in Gujarati almost all the day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about Technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked so many different things, from XBOX to Hindi Movies, and from SQL to Samosas. I just could not express how much fun I had. After a long evening, when I returned tmy room and met Shaivi, I just felt instantly relaxed. 
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: Visual Studio Lanch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, community leaders meeting, speakers dinner and, last but not leas,t playing with my child! A perfect day! Day 2 – April 13, 2010 Today started with a bang with the excellent keynote by Kamal Hathi who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 Million Rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010 The day was a bit easier one for me. I had no sessions today and no events planned. I had a few meetings planned for the second day of the event. I sat in the speaker’s lounge for half a day and met many people there. I attended nearly 9 different meetings today. The subjects of the meetings were very different. Here is a list of the topics of the Community-related meetings: SQL PASS and its involvement in India and subcontinents How to start community blogging Forums and developing aptitude towards technology Ahmedabad/Gandhinagar User Groups and their developments SharePoint and SQL Business Meeting – a client meeting Business Meeting – a potential performance tuning project Business Meeting – Solid Quality Mentors (SolidQ) And family friends Pinal Dave at TechEd India The day passed by so quickly during this meeting. In the evening, I headed to Partners Expo with friends and checked out few of the booths. I really wanted to talk about some of the products, but due to the freebies there was so much crowd that I finally decided to just take the contact details of the partner. I will now start sending them with my queries and, hopefully, I will have my questions answered. Nupur and Shaivi had also one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves Technology and loves it more than anybody. I see him growing and learning every day, but still remaining as a ‘human’. I believe that if someone acquires as much knowledge as him, that person will become either a computer or cyborg. Here, Vijay is still a kind gentleman and is able to stay as our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj Renuka Prasad, a Microsoft MVP, impressed me with his passion and knowledge of SQL. Every time he gives me credit for his success, I believe that he is very humble. He has way more certifications than me and has worked many more years with SQL compared to me. He is an excellent photographer as well. Most of the photos in this blog post have been taken by him. I told him if ever he wants to do a part time job, he can do the photography very well. Pinal Dave and Renuka Prasad I also met L Srividya from Microsoft, whom I was looking forward to meet. She is a bundle of knowledge that everyone would surely learn a lot from her. I was able to get a few minutes from her and well, I felt confident. She enlightened me with SQL Server BI concepts, domain management and SQL Server security and few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2. 
I told him a joke that it hurts my ego as he is more popular now in SQL training and consulting than me. I am sure all of you agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best Industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari,  Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!) .  (Please let me know if I have met you at the event and forgot your name to list here). Time: 8:00 PM – onwards Community Leaders Dinner After lots of meetings, I headed towards the Community Leaders dinner meeting and met almost all the folks I met in morning. The discussion was almost the same but the real good thing was that we were enjoying it. The food was really good. Nupur was invited in the event, but Shaivi could not come. When Nupur tried to enter the event, she was stopped as Shaivi did not have the pass to enter the dinner. Nupur expressed that Shaivi is only 8 months old and does not eat outside food as well and could not stay by herself at this age, but the door keeper did not agree and asked that without the entry details Shaivi could not go in, but Nupur could. Nupur called me on phone and asked me to help her out. By the time, I was outside; the organizer of the event reached to the door and happily approved Shaivi to join the party. Once in the party, Shaivi had lots of fun meeting so many people. Shaivi Dave and Abhishek Kant Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com) Day 3 – April 14, 2010 Though, it was last day, I was very much excited today as I was about to present my very favorite session. Query Optimization and Performance Tuning is my domain expertise and I make my leaving by consulting and training the same. Today’s session was on the same subject and as an additional twist, another subject about Spatial Database was presented. I was always intrigued with Spatial Database and I have enjoyed learning about it; however, I have never thought about Spatial Indexing before it was decided that I will do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and also review the content. Furthermore, today was really what I call my ‘learning day’ . So far I had not attended any session in TechEd and I felt a bit down for that. Everybody spends their valuable time & money to learn something new and exciting in TechEd and I had not attended a single session at the moment thinking that it was already last day of the event. I did have a plan for the day and I attended two technical sessions before my session of spatial database. I attended 2 sessions of Vinod Kumar. Vinod is a natural storyteller and there was no doubt that his sessions would be jam-packed. People attended his sessions simply because Vinod is syhe speaker. He did not have a single time disappointed audience; he is truly a good speaker. He knows his stuff very well. I personally do not think that in India he can be compared to anyone for SQL. Time: 12:30pm-1:30pm SQL Server Query Optimization, Execution and Debugging Query Performance I really had a fun time attending this session. Vinod made this session very interactive. The entire audience really got into the presentation and started participating in the event. 
Vinod was presenting a small problem with Query Tuning, which any developer would have encountered and solved with their help in such a fashion that a developer feels he or she have already resolved it. In one question, I was the only one who was ready to answer and Vinod told me in a light tone that I am now allowed to answer it! The audience really found it very amusing. There was a huge crowd around Vinod after the session. Vinod – A master storyteller! Time: 3:45pm-4:45pm Data Recovery / consistency with CheckDB This session was much heavier than the earlier one, and I must say this is my most favorite session I EVER attended in India. In this TechEd I have only attended two sessions, but in my career, I have attended numerous technical sessions not only in India, but all over the world. This session had taken my breath away. One by one, Vinod took the different databases, and started to corrupt them in different ways. Each database has some unique ways to get corrupted. Once that was done, Vinod started to show the DBCC CEHCKDB and demonstrated how it can solve your problem. He finally fixed all the databases with this single tool. I do have a good knowledge of this subject, but let me honestly admit that I have learned a lot from this session. I enjoyed and cheered during this session along with other attendees. I had total satisfaction that, just like everyone, I took advantage of the event and learned something. I am now TECHnically EDucated. Pinal Dave and Vinod Kumar After two very interactive and informative SQL Sessions from Vinod Kumar, the next turn me presenting on Spatial Database and Indexing. I got once again nervous but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better. Time: 5:00pm-6:00pm Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Pinal Presenting session at TechEd India 2010 Pinal Presenting session at TechEd India 2010 I kicked off this session with Michael J Swart‘s beautiful spatial image. This session was the last one for the day but, to my surprise, I had more than 200+ attendees. Slowly, the rain was starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available in the first five minutes of the session. Thanks to all of you for attending my presentation. I had demonstrated the map of world (and India) and quickly explained what  Geographic and Geometry data types in Spatial Database are. This session had interesting story of Indexing and Comparison, as well as how different traditional indexes are from spatial indexing. Pinal Presenting session at TechEd India 2010 Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobodies fault). During these minutes, there were no audio, no video and no light. I continued to address the mass of 200+ people without any audio device and PowerPoint. I must thank the audience because not a single person left from the session. They all stayed in their place, some moved closure to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at the peak even though it was the very last session of the TechEd. Everybody wanted get the maximum knowledge out of this whole event. I was touched by the support from audience. They listened and participated in my session even without any kinds of technology (no ppt, no mike, no AC, nothing). 
During these 22 minutes, I completed my theory verbally.

Pinal Presenting session at TechEd India 2010

After a while, we got the projector back online and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get the backup power for the projector up. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India map and found the aerial distance between them. After finding the aerial distance, we browsed online and found that the distance SQL Server estimates between these two cities closely matches the factual distance. There was huge applause from the crowd over the fact that SQL Server takes into account the curvature of the earth and finds precise distances based on it. While finding the distance, I demonstrated a few examples of the indexes, showing how one can use those indexes to find these distances and how they can improve the performance of similar queries. I also demonstrated a few examples wherein we were able to see for which data type an index is most useful. We finished the demos with a bit more on the internals.

Pinal Presenting session at TechEd India 2010

Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that for a moment the rain was not audible. I was truly moved by the dedication of the technology enthusiasts.

Pinal Dave After Presenting session at TechEd India 2010

The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, the new geometry data type to store planar spatial data and perform operations on it, how to take advantage of new spatial indexes for high performance queries, how to use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.

Time: 8:00 PM – onwards
Dinner by Sponsors

After the lively sessions during the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in all angles. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event.

Pinal Dave and Bijoy Singhal

It was really one wonderful event. After writing this much, I can say that I have no words to express how much I enjoyed TechEd. However, it is true that I have shared with you only 1% of the total activities I did at the event. There were so many people I met whom I have not mentioned here, although I wanted to write their names here, too.
Anyway, I learned so many things, and up until now I have not been able to get over all the fun I had at this event.

Pinal Dave at TechEd India 2010

The Next Days – April 15, 2010 – till today

I am still not able to get my mind off the whole experience I had at TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience!

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints of the services it invokes. This might not be a problem in a small system, but when you have more than 10 services it becomes one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

    Benefit of Discovery Service

    Almost all my services need to invoke at least one other service, so it would be a difficult task to make sure all service endpoints are configured correctly in every service. Furthermore, it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X can then use one of those endpoints to send its request to service Y. When a service changes its endpoint address, all it needs to do is update its record in the discovery service, and all the others will know its new endpoint.

    WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services which have the discovery behavior enabled will receive this message, and only the matching services will send their endpoints back to the client. The managed proxy discovery mode works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to the MSDN.

    Service Announcement and Probe

    The main functionality of a discovery service is to return the proper endpoint addresses to the service that is looking for them. In most cases the consuming service (as a client) sends the contract it wants to invoke to the discovery service, and the discovery service finds the endpoint and responds. Sometimes the contract and endpoint are not enough; the metadata can also contain versioning and extension attributes. This post will only cover the case involving contract and endpoint.

    When a client (or sometimes a service that needs to invoke another service) needs to connect to a target service, it first requests the discovery service through the "Probe" method with its criteria. Basically, the criteria contain the contract type name of the target service. The discovery service then searches its endpoint repository with the criteria. The repository might be a database, a distributed cache or a flat XML file. If a match is found, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe".
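    As a quick aside, the ad-hoc mode mentioned above can be exercised with just a few lines. The following is only a sketch of my own, assuming a service exposing some contract (IStringService, which is defined later in this post) is online on the local subnet with the discovery behavior enabled; it uses the UdpDiscoveryEndpoint that ships with System.ServiceModel.Discovery:

    using System;
    using System.ServiceModel.Discovery;

    class AdHocProbe
    {
        static void Main()
        {
            // Broadcast a probe over UDP multicast - no standalone discovery service involved.
            // IStringService is the sample contract defined later in this post (an assumption here).
            var discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
            FindResponse response = discoveryClient.Find(new FindCriteria(typeof(IStringService)));

            // Every discoverable service that matched the criteria answered with its endpoint.
            foreach (var endpoint in response.Endpoints)
                Console.WriteLine(endpoint.Address);
        }
    }

    The rest of this post sticks to the managed proxy mode.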
    Back in the managed proxy mode, the client finally receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service should also take responsibility for knowing when a new service becomes available, as well as when a service goes offline. This feature is named "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service should include:

    1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
    2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
    3. An endpoint which receives the client probe message and returns the matching service endpoints as discovery endpoint metadata.

    WCF 4.0's discovery infrastructure classes cover all of these features.

    Discovery Service in WCF 4.0

    WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports the ad-hoc and managed proxy modes. For the case discussed in this post, what we need to build is a standalone discovery service, i.e. the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements the plumbing of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint registration, unregistration and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service happen asynchronously, for better scalability and performance.

    1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We add the endpoint information to the repository in this method.
    2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We remove the endpoint information from the repository in this method.
    3. OnBeginFind, OnEndFind: invoked when a client sends the probe message to find service endpoint information. We look for the proper endpoints in the repository by matching the client's criteria in this method.
    4. OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Unlike the find method, the resolve method returns exactly one service endpoint metadata entry to the client. In our example we will NOT implement this method.

    Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class: since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to single.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel.Discovery;
    using System.ServiceModel;

    namespace Phare.Service
    {
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class ManagedProxyDiscoveryService : DiscoveryProxy
        {
            protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndFind(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndOfflineAnnouncement(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndOnlineAnnouncement(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result)
            {
                throw new NotImplementedException();
            }
        }
    }

    Then let's implement the online, offline and find methods one by one. The WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For demo purposes we will use an internal dictionary to store the services' endpoint metadata; in the next post we will see how to serialize and store this information in a database. Define a concurrent dictionary inside the service class, since it will be used in a multi-threaded scenario (note that ConcurrentDictionary requires a using System.Collections.Concurrent directive):

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class ManagedProxyDiscoveryService : DiscoveryProxy
    {
        private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services;

        public ManagedProxyDiscoveryService()
        {
            _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>();
        }
    }

    Then we can simply implement the logic of service online and offline.
    protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
    {
        _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata);
        return new OnOnlineAnnouncementAsyncResult(callback, state);
    }

    protected override void OnEndOnlineAnnouncement(IAsyncResult result)
    {
        OnOnlineAnnouncementAsyncResult.End(result);
    }

    protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
    {
        EndpointDiscoveryMetadata endpoint = null;
        _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint);
        return new OnOfflineAnnouncementAsyncResult(callback, state);
    }

    protected override void OnEndOfflineAnnouncement(IAsyncResult result)
    {
        OnOfflineAnnouncementAsyncResult.End(result);
    }

    Regarding the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which we can use to evaluate which service metadata satisfies the criteria. So the implementation of the find method would be like this:

    protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
    {
        _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value))
                 .Select(s => s.Value)
                 .All(meta =>
                 {
                     findRequestContext.AddMatchingEndpoint(meta);
                     return true;
                 });
        return new OnFindAsyncResult(callback, state);
    }

    protected override void OnEndFind(IAsyncResult result)
    {
        OnFindAsyncResult.End(result);
    }

    As you can see, we check all endpoint metadata in the repository by invoking the IsMatch method, and add each matching endpoint's metadata to the find request context. Finally, since all these methods are asynchronous, we need some AsyncResult classes as well. Below are the base class and the inherited classes used in the previous methods.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading;

    namespace Phare.Service
    {
        abstract internal class AsyncResult : IAsyncResult
        {
            AsyncCallback callback;
            bool completedSynchronously;
            bool endCalled;
            Exception exception;
            bool isCompleted;
            ManualResetEvent manualResetEvent;
            object state;
            object thisLock;

            protected AsyncResult(AsyncCallback callback, object state)
            {
                this.callback = callback;
                this.state = state;
                this.thisLock = new object();
            }

            public object AsyncState
            {
                get { return state; }
            }

            public WaitHandle AsyncWaitHandle
            {
                get
                {
                    if (manualResetEvent != null)
                        return manualResetEvent;

                    lock (ThisLock)
                    {
                        if (manualResetEvent == null)
                            manualResetEvent = new ManualResetEvent(isCompleted);
                    }
                    return manualResetEvent;
                }
            }

            public bool CompletedSynchronously
            {
                get { return completedSynchronously; }
            }

            public bool IsCompleted
            {
                get { return isCompleted; }
            }

            object ThisLock
            {
                get { return this.thisLock; }
            }

            protected static TAsyncResult End<TAsyncResult>(IAsyncResult result)
                where TAsyncResult : AsyncResult
            {
                if (result == null)
                    throw new ArgumentNullException("result");

                TAsyncResult asyncResult = result as TAsyncResult;
                if (asyncResult == null)
                    throw new ArgumentException("Invalid async result.", "result");

                if (asyncResult.endCalled)
                    throw new InvalidOperationException("Async object already ended.");

                asyncResult.endCalled = true;

                if (!asyncResult.isCompleted)
                    asyncResult.AsyncWaitHandle.WaitOne();

                if (asyncResult.manualResetEvent != null)
                    asyncResult.manualResetEvent.Close();

                if (asyncResult.exception != null)
                    throw asyncResult.exception;

                return asyncResult;
            }

            protected void Complete(bool completedSynchronously)
            {
                if (isCompleted)
                    throw new InvalidOperationException("This async result is already completed.");

                this.completedSynchronously = completedSynchronously;

                if (completedSynchronously)
                {
                    this.isCompleted = true;
                }
                else
                {
                    lock (ThisLock)
                    {
                        this.isCompleted = true;
                        if (this.manualResetEvent != null)
                            this.manualResetEvent.Set();
                    }
                }

                if (callback != null)
                    callback(this);
            }

            protected void Complete(bool completedSynchronously, Exception exception)
            {
                this.exception = exception;
                Complete(completedSynchronously);
            }
        }
    }

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel.Discovery;
    using Phare.Service;

    namespace Phare.Service
    {
        internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult
        {
            public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result);
            }
        }

        sealed class OnOfflineAnnouncementAsyncResult : AsyncResult
        {
            public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result);
            }
        }

        sealed class OnFindAsyncResult : AsyncResult
        {
            public OnFindAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnFindAsyncResult>(result);
            }
        }

        sealed class OnResolveAsyncResult : AsyncResult
        {
            EndpointDiscoveryMetadata matchingEndpoint;

            public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.matchingEndpoint = matchingEndpoint;
                this.Complete(true);
            }

            public static EndpointDiscoveryMetadata End(IAsyncResult result)
            {
                OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result);
                return thisPtr.matchingEndpoint;
            }
        }
    }

    Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service, so we can use ServiceHost in a console application, a Windows service, or in IIS as usual. The following code hosts the discovery service we just created in a console application:

    static void Main(string[] args)
    {
        using (var host = new ServiceHost(new ManagedProxyDiscoveryService()))
        {
            host.Opened += (sender, e) =>
            {
                host.Description.Endpoints.All((ep) =>
                {
                    Console.WriteLine(ep.ListenUri);
                    return true;
                });
            };

            try
            {
                // retrieve the announcement, probe endpoint and binding from configuration
                var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
                var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
                var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
                var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress);
                var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress);
                probeEndpoint.IsSystemEndpoint = false;
                // append the service endpoint for announcement and probe
                host.AddServiceEndpoint(announcementEndpoint);
                host.AddServiceEndpoint(probeEndpoint);

                host.Open();

                Console.WriteLine("Press any key to exit.");
                Console.ReadKey();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }

        Console.WriteLine("Done.");
        Console.ReadKey();
    }

    Note that the discovery service needs two endpoints, one for announcement and one for probe. In this example I retrieve their addresses from the configuration file, and I specify the binding of these two endpoints in the configuration file as well.
    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
      <appSettings>
        <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
        <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
        <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
      </appSettings>
    </configuration>

    When I ran my discovery service, the console showed the two endpoints listening for announcement messages and probe messages.

    Discoverable Service and Client

    Next, let's create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as the offline message before it shuts down. Just create a simple service which converts the incoming string to upper case. The service contract and implementation would be like this:

    [ServiceContract]
    public interface IStringService
    {
        [OperationContract]
        string ToUpper(string content);
    }

    public class StringService : IStringService
    {
        public string ToUpper(string content)
        {
            return content.ToUpper();
        }
    }

    Then host this service in a console application. In order to make the discovery service easy to test, the service address changes each time the host starts:

    static void Main(string[] args)
    {
        var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString()));

        using (var host = new ServiceHost(typeof(StringService), baseAddress))
        {
            host.Opened += (sender, e) =>
            {
                Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri);
            };

            host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty);

            host.Open();

            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }

    Currently this service is NOT discoverable. We need to add a special service behavior so that it sends the online and offline messages to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior: once we specify the announcement endpoint address and append the behavior to the service, the service becomes discoverable.

    var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
    var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
    var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress);
    var discoveryBehavior = new ServiceDiscoveryBehavior();
    discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint);
    host.Description.Behaviors.Add(discoveryBehavior);

    The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.

    On the client side, once we have the discovery service, a client can invoke a service without knowing its endpoint.
    The WCF discovery assembly provides a class named DiscoveryClient which can be used to find the proper service endpoints by passing criteria. In the code below I initialize the DiscoveryClient with the discovery service probe endpoint address, create the find criteria specifying the service contract I want to use, and invoke the Find method. This sends the probe message to the discovery service, which sends the matching endpoints back to me. The discovery service returns all endpoints that match the find criteria, which means the result of the find method might contain more than one endpoint. In this example I just return the first match. In the next post I will show how to extend our discovery service to make it work like a service load balancer.

    static EndpointAddress FindServiceEndpoint()
    {
        var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
        var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
        var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress);

        EndpointAddress address = null;
        FindResponse result = null;
        using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
        {
            result = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
        }

        if (result != null && result.Endpoints.Any())
        {
            var endpointMetadata = result.Endpoints.First();
            address = endpointMetadata.Address;
        }
        return address;
    }

    Once we have probed the discovery service we receive the endpoint, so in the client code we can create the channel factory from the endpoint and binding and invoke the service. When creating the client side channel factory we need to make sure that the client side binding is the same as the service side's. The WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included, because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service; at that point the client won't need to create the binding by itself, but will instead use the binding received from the discovery service.

    static void Main(string[] args)
    {
        Console.WriteLine("Say something...");
        var content = Console.ReadLine();
        while (!string.IsNullOrWhiteSpace(content))
        {
            Console.WriteLine("Finding the service endpoint...");
            var address = FindServiceEndpoint();
            if (address == null)
            {
                Console.WriteLine("There is no endpoint that matches the criteria.");
            }
            else
            {
                Console.WriteLine("Found the endpoint {0}", address.Uri);

                var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address);
                factory.Opened += (sender, e) =>
                {
                    Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri);
                };
                var proxy = factory.CreateChannel();
                using (proxy as IDisposable)
                {
                    Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content));
                }
            }

            Console.WriteLine("Say something...");
            content = Console.ReadLine();
        }
    }
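    Picking the first endpoint keeps the sample simple. As a small teaser for the load-balancer idea mentioned above, here is a sketch of my own (not part of the original sample) that spreads clients across all matching endpoints by picking one at random:

    // Sketch: pick a random matching endpoint from the FindResponse so repeated
    // probes distribute client calls across all registered service instances.
    static EndpointAddress PickRandomEndpoint(FindResponse result)
    {
        if (result == null || result.Endpoints.Count == 0)
            return null;

        var random = new Random();
        return result.Endpoints[random.Next(result.Endpoints.Count)].Address;
    }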
    Similarly, the discovery service probe endpoint and binding are defined in the configuration file:

    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
      <appSettings>
        <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
        <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
        <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
      </appSettings>
    </configuration>

    OK, now let's have a test. First start the discovery service, and then start our discoverable service. When the discoverable service starts, it announces itself to the discovery service and registers its endpoint in the repository, which is the local dictionary. Then start the client and type something. As you can see, the client asks the discovery service for the endpoint and then establishes the connection to the discoverable service. More interestingly, do NOT close the client console, but terminate the discoverable service by pressing the Enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted on another address. If we enter something in the client, we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.

    Summary

    In this post I discussed the benefit of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, in this example I used an in-memory dictionary as the discovery endpoint metadata repository, and when finding I just return the first matching endpoint. I also hard coded the bindings between the discoverable service and the client. In the next post I will show you how to solve these problems, as well as some additional features for production usage. You can download the code here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Ant get task throws "get doesn't support nested resources element" error

    - by David Corley
    The following Ant XML should work according to the documentation, but does not. Can anyone tell me if I'm doing something wrong? The get task should support the nested "resources" element in Ant 1.7.1, which is the version I'm using:

    <target name="setup">
        <tstamp/>
        <!-- set up work areas -->
        <!--<taskdef name="ccmutil" classname="com.allfinanz.framework.tools.CCMUtil" classpath="\\Abate\Data\Build_Lib\Ivy\com.allfinanz\ccmutil\1.0\ccmutil-1.0.jar"/>-->
        <!-- 1st one is special, also sets ${project_wa} -->
        <!--<ccmutil file="${ant.file}" projects="framework, xpbuw, xpb, bil"/>-->
        <property name="framework_wa" value="../../../framework"/>
        <property name="xpbuw_wa" value="../../../xpbuw"/>
        <property name="xpb_wa" value="../../../xpb"/>
        <property name="bil_wa" value="../.."/>
        <!-- Create properties to hold the build values -->
        <property name="out" value="${user.dir}"/>
        <!-- This may be overridden from the command line -->
        <property name="locale" value="us"/>
        <!-- set contextRoot up as a property - this means that it can be overwritten from the command line e.g.: ant -DcontextRoot=xpertBridge. -->
        <property name="contextRoot" value="xpertBridge"/>
        <property name="build_dir" value="${out}/${release}/build"/>
        <property name="distrib_dir" value="${out}/${release}/distrib"/>
        <property name="build.number" value="-1"/>
        <!-- Download dependencies from repo.fms.allfinanz.com -->
        <get dest="${lib}">
            <resources>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=central&amp;g=soap&amp;a=soap&amp;v=2.3.1&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=JBOSS&amp;g=apache-fileupload&amp;a=commons-fileupload&amp;v=1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=regexp&amp;a=regexp&amp;v=1.1&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=javax.mail&amp;a=mail&amp;v=1.2&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm.ws.webservices&amp;a=webservices.thinclient&amp;v=6.1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=avalon-framework&amp;a=avalon-framework&amp;v=4.2.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=jimi&amp;a=jimi&amp;v=1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=batik&amp;a=batik-all&amp;v=1.6&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=bsf&amp;a=bsf&amp;v=2.3.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=rhino&amp;a=js&amp;v=1.5R3&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=central&amp;g=commons-io&amp;a=commons-io&amp;v=1.1&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=central&amp;g=commons-logging&amp;a=commons-logging&amp;v=1.0.4&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=xmlgraphics&amp;a=commons&amp;v=1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=barcode4j&amp;a=barcode4j&amp;v=trunkBIL&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm&amp;a=fmcojagt&amp;v=6.1&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.allfinanz&amp;a=ejbserversupport&amp;v=1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.sun&amp;a=jce&amp;v=1.0&amp;e=zip"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=ssce&amp;a=ssce&amp;v=5.8&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm&amp;a=mq&amp;v=5.1&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.ibm&amp;a=mqjms&amp;v=5.1&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=NetServerRemote&amp;a=NetServerRemote&amp;v=1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=NetServerRMI&amp;a=NetServerRMI&amp;v=1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=jwsdp&amp;a=saaj-api&amp;v=1.5&amp;e=jar&amp;c=api"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=jwsdp&amp;a=saaj-impl&amp;v=1.5&amp;e=jar&amp;c=impl"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=org.apache.xmlgraphics&amp;a=fop&amp;v=0.92b&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=xerces&amp;a=dom3-xml-apis&amp;v=1.0&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=org.apache&amp;a=derbynet&amp;v=10.0.2&amp;e=jar"/>
                <url url="http://repo.fms.allfinanz.com/service/local/artifact/maven/redirect?r=thirdparty&amp;g=com.sun&amp;a=jsse&amp;v=1.0&amp;e=jar"/>
            </resources>
        </get>
    </target>

    Read the article

  • Custom ASP.NET Routing to an HttpHandler

    - by Rick Strahl
    As of version 4.0, ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HttpRuntime via a few custom interfaces.

    New Web Forms Routing Support

    In ASP.NET 4.0 there are a host of improvements, including routing support baked into Web Forms via a RouteData property available on the Page class and a RouteCollection.MapPageRoute() method that makes it easy to route to Web Forms. Mapping ASP.NET Page routes is as simple as setting up the routes with MapPageRoute:

    protected void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }

    void RegisterRoutes(RouteCollection routes)
    {
        routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx");
        routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx");
    }

    You can then access the route data in the page via the new RouteData property on the Page class to retrieve the dynamic route information:

    public partial class StockQuote1 : System.Web.UI.Page
    {
        protected StockQuote Quote = null;

        protected void Page_Load(object sender, EventArgs e)
        {
            string symbol = RouteData.Values["symbol"] as string;

            StockServer server = new StockServer();
            Quote = server.GetStockQuote(symbol);

            // display stock data in Page View
        }
    }

    Simple, quick, and it doesn't require much explanation. If you're using Web Forms, most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy!

    How Routing Works

    Handling routing in ASP.NET involves these steps:

    - Registering routes
    - Creating a custom RouteHandler to retrieve an HttpHandler
    - Attaching RouteData to your HttpHandler
    - Picking up route information in your request code

    Registering routes makes ASP.NET aware of the routes you want to handle via the static RouteTable.Routes collection. You basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes in a RegisterRoutes method that fires in Application_Start, as I did in the example above, to ensure routes are added only once when the application first starts up.

    When you create a route, you pass in a RouteHandler instance which ASP.NET caches and reuses as routes are matched. Once registered, ASP.NET monitors the routes, and if a match is found just prior to HttpHandler instantiation, ASP.NET uses the RouteHandler registered for the route and calls GetHttpHandler() on it to retrieve an HttpHandler instance. The RouteHandler.GetHttpHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and – if necessary – for assigning any additional custom data to the handler. At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this, you typically add a RouteData property to your handler and then assign the property from the RouteHandler's request context. This is essentially how Page.RouteData comes into being, and this approach should work well for any custom handler implementation that requires RouteData. It's a shame that ASP.NET doesn't have a top level intrinsic object accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers and not all handlers support it, that might cause some confusion about when it's actually available.
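    To make that last point concrete, here's a minimal sketch (hypothetical handler and route names of my own, not part of the toolkit discussed below) of an HttpHandler that exposes such a RouteData property for a RouteHandler to fill:

    // Minimal sketch of an HttpHandler that can accept RouteData from a custom RouteHandler.
    using System.Web;
    using System.Web.Routing;

    public class MyRoutedHandler : IHttpHandler
    {
        // Assigned by the custom RouteHandler inside its GetHttpHandler() implementation
        public RouteData RouteData { get; set; }

        public bool IsReusable
        {
            get { return false; }
        }

        public void ProcessRequest(HttpContext context)
        {
            // Pull a routed value, e.g. for a route like "StockQuote/{symbol}"
            string symbol = RouteData != null ? RouteData.Values["symbol"] as string : null;
            context.Response.Write("Requested symbol: " + symbol);
        }
    }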
    Bottom line: if you want to hold on to RouteData, you have to assign it to a custom property of the handler, or else pass it to the handler via the Context.Items[] collection where it can be retrieved on an as-needed basis. It's important to understand that routing is hooked up via RouteHandlers that are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route, and through this RouteHandler instance the handler gains access to the current RouteData. Because of this, it's important to understand that routing is really tied to HttpHandlers and not available prior to handler instantiation, which is pretty late in the HttpRuntime's request pipeline. IOW, routing works with handlers, but not earlier in the pipeline with modules. Specifically, ASP.NET calls RouteHandler.GetHttpHandler() from the PostResolveRequestCache HttpRuntime pipeline event (the original post shows the call stack captured at the beginning of the GetHttpHandler() call), which fires just before handler resolution.

    Non-Page Routing – You need to build custom RouteHandlers

    If you need to route to a custom HttpHandler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available; you need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing, this process can be simple or fairly involved, as your code is responsible for deciding, based on the route data provided, which handler to instantiate and, more importantly, how to pass the route data on to the handler. Luckily, creating a RouteHandler is easy: implement the IRouteHandler interface, which has only a single GetHttpHandler(RequestContext context) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of choice, and assign the RouteData to it. Then pass back the handler and you're done. Here's a simple example of a GetHttpHandler() method that dynamically creates a handler based on a passed-in handler type:

    /// <summary>
    /// Retrieves an Http Handler based on the type specified in the constructor
    /// </summary>
    /// <param name="requestContext"></param>
    /// <returns></returns>
    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

        // If we're dealing with a Callback Handler
        // pass the RouteData for this route to the Handler
        if (handler is CallbackHandler)
            ((CallbackHandler)handler).RouteData = requestContext.RouteData;

        return handler;
    }

    Note that this code checks for a specific type of handler and, if it matches, assigns the RouteData to the handler. This is optional, but it's quite a common scenario if you want to work with RouteData.
    If the handler you need to instantiate isn't under your control but you still need to pass RouteData to handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:

    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
        requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData;
        return handler;
    }

    The code in the handler implementation can then pick up the RouteData from the context collection as needed:

    RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData;

    This isn't as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the handler's code chain. It's definitely preferable to create a custom property on your handler, but the Context workaround works in a pinch when you don't own the handler code and have dynamic code executing as part of the handler execution.

    An Example of a Custom RouteHandler: Attribute Based Route Implementation

    In this post I'm going to discuss a custom routing implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can very easily be used for creating AJAX, REST and POX requests following RPC style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It's a quick and easy way to build service interfaces with no fuss. As a quick review, here's how CallbackHandler works:

    - You create an Http Handler that derives from CallbackHandler
    - You implement methods that have a [CallbackMethod] attribute

    and that's it. Here's an example of a CallbackHandler implementation in an ashx.cs based handler:

    // RestService.ashx.cs
    public class RestService : CallbackHandler
    {
        [CallbackMethod]
        public StockQuote GetStockQuote(string symbol)
        {
            StockServer server = new StockServer();
            return server.GetStockQuote(symbol);
        }

        [CallbackMethod]
        public StockQuote[] GetStockQuotes(string symbolList)
        {
            StockServer server = new StockServer();
            string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
            return server.GetStockQuotes(symbols);
        }
    }

    CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, query string or raw JSON/XML data, and then retrieve the results easily in various formats. This works wonderfully, and I've used these tools in many projects for myself and with clients. But one thing that has been missing is the ability to create clean URLs. Typical URLs looked like this:

    http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msft
    http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml

    which works and is clear enough, but is also clearly very ugly. It would be much nicer if URLs could look like this:

    http://www.west-wind.com//WestwindWebtoolkit/Samples/StockQuote/msft
    http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml

    (the virtual root in this sample is WestwindWebToolkit/Samples and StockQuote/{symbol} is the route; if you use FireFox, try the JSONView plug-in to make it easier to view JSON content). So, taking a clue from the WCF REST tools that use RouteUrls, I set out to create a way to specify RouteUrls for each of the endpoints.
    The change basically allows changing the above to:

    [CallbackMethod(RouteUrl="RestService/StockQuote/{symbol}")]
    public StockQuote GetStockQuote(string symbol)
    {
        StockServer server = new StockServer();
        return server.GetStockQuote(symbol);
    }

    [CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")]
    public StockQuote[] GetStockQuotes(string symbolList)
    {
        StockServer server = new StockServer();
        string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
        return server.GetStockQuotes(symbols);
    }

    where a RouteUrl is specified as part of the Callback attribute. With the RouteUrl changes in place, I can now get URLs like the second set shown earlier. So how does that work? Let's find out…

    How to Create Custom Routes

    As mentioned earlier, routing is made up of several steps:

    - Creating a custom RouteHandler to create HttpHandler instances
    - Mapping the actual routes to the RouteHandler
    - Retrieving the RouteData and actually doing something useful with it in the HttpHandler

    In the CallbackHandler routing example above this works out to something like this:

    - Create a custom RouteHandler that includes a property to track the method to call
    - Set up the routes using Reflection against the class, looking for any RouteUrls in the CallbackMethod attribute
    - Add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler

    Creating a Custom Route Handler

    To make the above work, I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="…")] attribute. Here's the code:

    /// <summary>
    /// Route handler that can create instances of CallbackHandler derived
    /// callback classes. The route handler tracks the method name and
    /// creates an instance of the service in a predictable manner
    /// </summary>
    /// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam>
    public class CallbackHandlerRouteHandler : IRouteHandler
    {
        /// <summary>
        /// Method name that is to be called on this route.
        /// Set by the automatically generated RegisterRoutes invocation.
        /// </summary>
        public string MethodName { get; set; }

        /// <summary>
        /// The type of the handler we're going to instantiate.
        /// Needed so we can semi-generically instantiate the
        /// handler and call the method on it.
        /// </summary>
        public Type CallbackHandlerType { get; set; }

        /// <summary>
        /// Constructor to pass in the two required components we
        /// need to create an instance of our handler.
        /// </summary>
        /// <param name="methodName"></param>
        /// <param name="callbackHandlerType"></param>
        public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType)
        {
            MethodName = methodName;
            CallbackHandlerType = callbackHandlerType;
        }

        /// <summary>
        /// Retrieves an Http Handler based on the type specified in the constructor
        /// </summary>
        /// <param name="requestContext"></param>
        /// <returns></returns>
        IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
        {
            IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

            // If we're dealing with a Callback Handler
            // pass the RouteData for this route to the Handler
            if (handler is CallbackHandler)
                ((CallbackHandler)handler).RouteData = requestContext.RouteData;

            return handler;
        }

        /// <summary>
        /// Generic method to register all routes from a CallbackHandler
        /// that have RouteUrls defined on the [CallbackMethod] attribute
        /// </summary>
        /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam>
        /// <param name="routes"></param>
        public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes)
        {
            // find all methods
            var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public);
            foreach (var method in methods)
            {
                var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
                if (attrs.Length < 1)
                    continue;

                CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute;
                if (string.IsNullOrEmpty(attr.RouteUrl))
                    continue;

                // Add the route
                routes.Add(method.Name,
                           new Route(attr.RouteUrl, new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));
            }
        }
    }

    The RouteHandler implements IRouteHandler, and its responsibility via the GetHttpHandler method is to create an HttpHandler based on the route data. When ASP.NET calls GetHttpHandler, it passes a requestContext parameter which includes a requestContext.RouteData property. This parameter holds the current request's route data as well as an instance of the current RouteHandler. If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you can pass the current request's RouteData to the handler. The RouteData object also has a RouteData.RouteHandler property that is available to the handler later, which is useful for getting additional information about the current route. In our case the RouteHandler includes a MethodName property that identifies the method to execute in the handler, since that value no longer comes from the URL and we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created: the static method that auto-registers all CallbackMethods with RouteUrls sets the method name as it creates the routes while reflecting over the methods (more on this in a minute). The important point here is that you can attach additional properties to the RouteHandler and later access the RouteHandler and its properties in the handler to pick up these custom values. This is a crucial feature, in that the RouteHandler serves to pass additional context to the handler so it knows what actions to perform. The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler type handler.
    To register a CallbackHandler and any RouteUrls it has defined, you simply use code like this in Application_Start (or other application startup code):

    protected void Application_Start(object sender, EventArgs e)
    {
        // Register Routes for RestService
        CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);
    }

    If you have multiple CallbackHandler style services, you can make multiple calls to RegisterRoutes, one for each of the service types. RegisterRoutes internally uses reflection to run through all the methods of the handler, looking for CallbackMethod attributes and checking whether a RouteUrl is specified. If one is, a new instance of a CallbackHandlerRouteHandler is created with the name of the method and the type:

    routes.Add(method.Name,
               new Route(attr.RouteUrl, new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));

    While routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also use code to hook up those routes manually and skip using the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want routed:

    protected void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }

    void RegisterRoutes(RouteCollection routes)
    {
        routes.Add("StockQuote Route",
                   new Route("StockQuote/{symbol}",
                             new CallbackHandlerRouteHandler("GetStockQuote", typeof(RestService))));

        routes.Add("StockQuotes Route",
                   new Route("StockQuotes/{symbolList}",
                             new CallbackHandlerRouteHandler("GetStockQuotes", typeof(RestService))));
    }

    I think it's clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to manually create your routes is available as well.

    Using the RouteData in the Handler

    A RouteHandler's responsibility is to create an HttpHandler, and as mentioned earlier, natively IHttpHandler doesn't have any support for RouteData. In order to utilize RouteData in your handler code, you have to pass the RouteData to the handler. My CallbackHandlerRouteHandler, when it creates the HttpHandler instance, assigns the custom RouteData property on the handler:

    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

    if (handler is CallbackHandler)
        ((CallbackHandler)handler).RouteData = requestContext.RouteData;

    return handler;
Since my RouteHandler has a custom property for the MethodName to retrieve it from within the handler I can do something like this now to retrieve the MethodName (this example is actually not in the handler but target is an instance pass to the processor): // check for Route Data method name if (target is CallbackHandler) { var routeData = ((CallbackHandler)target).RouteData; if (routeData != null) methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName; } When I need to access the dynamic values in the route ( symbol in StockQuote/{symbol}) I can retrieve it easily with the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I’m basically looking for matching parameter names to Route parameters: // look for parameters in the routeif(routeData != null){    string parmString = routeData.Values[parameter.Name] as string;    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);} And with that we’ve come full circle. We’ve created a custom RouteHandler() that passes the RouteData to the handler it creates. We’ve registered our routes to use the RouteHandler, and we’ve utilized the route data in our handler. For completeness sake here’s the routine that executes a method call based on the parameters passed in and one of the options is to retrieve the inbound parameters off RouteData (as well as from POST data or QueryString parameters):internal object ExecuteMethod(string method, object target, string[] parameters, CallbackMethodParameterType paramType, ref CallbackMethodAttribute callbackMethodAttribute) { HttpRequest Request = HttpContext.Current.Request; object Result = null; // Stores parsed parameters (from string JSON or QUeryString Values) object[] adjustedParms = null; Type PageType = target.GetType(); MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic); if (MI == null) throw new InvalidOperationException("Invalid Server Method."); object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false); if (methods.Length < 1) throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute"); if (callbackMethodAttribute != null) callbackMethodAttribute = methods[0] as CallbackMethodAttribute; ParameterInfo[] parms = MI.GetParameters(); JSONSerializer serializer = new JSONSerializer(); RouteData routeData = null; if (target is CallbackHandler) routeData = ((CallbackHandler)target).RouteData; int parmCounter = 0; adjustedParms = new object[parms.Length]; foreach (ParameterInfo parameter in parms) { // Retrieve parameters out of QueryString or POST buffer if (parameters == null) { // look for parameters in the route if (routeData != null) { string parmString = routeData.Values[parameter.Name] as string; adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType); } // GET parameter are parsed as plain string values - no JSON encoding else if (HttpContext.Current.Request.HttpMethod == "GET") { // Look up the parameter by name string parmString = Request.QueryString[parameter.Name]; adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType); } // POST parameters are treated as methodParameters that are JSON encoded else if (paramType == CallbackMethodParameterType.Json) //string newVariable = methodParameters.GetValue(parmCounter) as string; adjustedParms[parmCounter] = 
The code basically uses Reflection to loop through all the parameters available on the method and tries to assign the parameters from RouteData, QueryString or POST variables. The parameters are converted into their appropriate types and then used to eventually make a Reflection based method call. What's sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario, and it adds exactly two lines of code plus the code to retrieve the MethodName I showed previously – a seriously low impact addition that adds a lot of extra value to this endpoint callback processing implementation.

Debugging your Routes

If you create a lot of routes it's easy to run into route conflicts, where multiple routes have the same path and overlap with each other. This can be difficult to debug, especially if you are using automatically generated routes like the routes created by CallbackHandlerRouteHandler.RegisterRoutes. Luckily there's a tool that can help you out with this nicely. Phil Haack created a RouteDebugging tool you can download and add to your project. The easiest way to grab it and add it to your project is to use NuGet (Add Library Package from your project's References node), which adds a RouteDebug assembly to your project. Once installed you can easily debug your routes with this simple line of code, which needs to run at application startup:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes);

    // Debug your routes
    RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
}

Any routed URL then displays your current route data and all the routes that are mapped, along with a flag that shows which route was actually matched. This is useful – if you have any overlap of routes you will be able to see which routes are triggered – the first one in the sequence wins. This tool has saved my ass on a few occasions – and with NuGet it's now easy to add it to your project in a few seconds and then remove it when you're done.

Routing Around

Custom routing seems slightly complicated at first blush due to its disconnected components of RouteHandler, route registration and mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing Routing becomes a lot easier, as you can easily pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is to figure out how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request.
This can be done via URL routing parameters or, as I did in my example, by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this 'context' was the method name, but it could be an actual static value like an enum identifying an operation or category in an application. Basically, user-supplied data comes in through the URL, and static application-internal data can be passed via RouteHandler property values.

Routing can make your application URLs easier to read for non-techie types, regardless of whether you're building Service type or REST applications, or full-on Web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URLs you can dream up with a custom RouteHandler.

References

Sample Project – Includes the sample CallbackHandler service discussed here along with compiled versions of the Westwind.Web and Westwind.Utilities assemblies. (requires .NET 4.0/VS 2010)

West Wind Web Toolkit – Includes the full implementation of CallbackHandler and the Routing Handler.

West Wind Web Toolkit Source Code – Contains the full source code to the Westwind.Web and Westwind.Utilities assemblies used in these samples. Includes the source described in the post. (Latest build in the Subversion Repository)

CallbackHandler Source – (Relevant code to this article in the Westwind.Web assembly)

JSONView FireFox Plugin – A simple FireFox plugin to easily view JSON data natively in FireFox. For IE you can use a registry hack to display JSON as raw text.

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, AJAX, HTTP

    Read the article

  • I am getting this error on each machine after installing Ruby and Rails; I created one web site and

    - by Santodsh
D:\PROJECTS\RubyOnRail\webapp\Welcome>ruby script\server
=> Booting WEBrick
=> Rails 2.3.4 application starting on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[2010-01-31 21:19:34] INFO WEBrick 1.3.1
[2010-01-31 21:19:34] INFO ruby 1.8.6 (2007-09-24) [i386-mswin32]
[2010-01-31 21:19:34] INFO WEBrick::HTTPServer#start: pid=6576 port=3000
/!\ FAILSAFE /!\ Sun Jan 31 21:19:38 +0530 2010
Status: 500 Internal Server Error
uninitialized constant Encoding
c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:443:in `load_missing_constant'
c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:80:in `const_missing'
c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:92:in `const_missing'
c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/encoding.rb:9:in `find'
c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/database.rb:69:in `initialize'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
c:/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:9:in `cache'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:28:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/params_parser.rb:15:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/session/cookie_store.rb:93:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/failsafe.rb:26:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:114:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/reloader.rb:34:in `run'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:108:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/static.rb:31:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/log_tailer.rb:17:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:50:in `service'
c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
c:/ruby/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
c:/ruby/lib/ruby/1.8/webrick/server.rb:95:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `each'
c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:23:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:82:in `start'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run'
c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/server.rb:111
c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
script/server:3

/!\ FAILSAFE /!\ Sun Jan 31 21:19:39 +0530 2010
Status: 500 Internal Server Error
uninitialized constant Encoding
(the identical stack trace repeats for the second request)

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green, but it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in.

To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious…

So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I'm not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with these three parts:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments.

The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.

In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.

I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The 'Red' (pt. 1)

First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my 'work' was six mouse clicks long. The only things left to do manually here are to add the IoC-specific wiring declaration and to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back in the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
       container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The 'Red' (pt. 2)

Now, the execution of the above test tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test is still worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part…

The 'Green'

Making the test green is quite trivial. Just make LastResult an automatic property:

    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult { get; private set; }
    }

One more round…

Now on to something slightly more demanding (cough…). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I'm using a data-driven unit test method here, mainly for these two reasons: because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose - I will only have to add another Row attribute to the existing one. And, from the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: one can quickly review what values were tested exactly - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

Back to our Calculator development: at the moment the test result tells us that we're red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }
    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }
    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is covered here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I'm regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

In the course of refining a method, you're very likely to come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software) and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test.

Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like this in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce the corresponding HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new…

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring.
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

"STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?"

Derick immediately answers his own question:

"So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it."

I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…).

It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves 'only' to make the production code work. But it's the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. Historically speaking, the entire software development profession is very young and still at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like a lot of unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for 'saving' a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Production of high-quality products requires the use of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the 'product' manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development…

But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences this might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion…

I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be very "good enough" in my practical experience. But of course, I'm open to suggestions here…

My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
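To make that rationale concrete, here is a minimal sketch of the two kinds of 'red', reusing the Calculator example from above (the container field and the Gallio/MbUnit attributes are assumed to be the ones shown earlier in this article):

// A minimal sketch of the two kinds of 'red' discussed here.

// Inside the red-green-refactor cycle, this is the acceptable 'red':
// the test reaches the right method, which is simply not yet operational.
public double Add(double operand1, double operand2)
{
    throw new NotImplementedException();
}

// In the Continuous Integration phase, 'red' must mean an unfulfilled
// assertion and nothing else - the method runs, but its result is wrong:
[Test]
public void TestAdd()
{
    ICalculator calculator = container.GetService<ICalculator>();

    // fails by assertion, not by a missing implementation or a compiler error
    Assert.AreEqual(2.0, calculator.Add(1, 1));
}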

    Read the article

  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
Introduction

jQuery enjoys living inside pages which are built on top of the ASP.NET MVC Framework. ASP.NET MVC is a place where things are organized very well, and it is quite hard to make them dirty, especially because the pattern pushes you toward purity (you can still make it dirty if you want to ;) ). We all know how easy it is to build an HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easily, but the result will be a pure HTML table which only shows data and does not include sorting, pagination, or other advanced features that we were used to having in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC Helper, but what if we want to make something from a pure table in our own clean style? In one of my recent projects, I've been using the jQuery tablesorter plugin and the tablesorter.pager plugin that goes along with it. You don't need to know jQuery to make this work… You need to know a little CSS to create a nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach these plugins to your pure HTML table and a div for pagination, and give your table advanced sorting and pagination features.

Demo Project Resources

The resources I'm using for this demo project are the following:

Content/images – folder that contains all the up/down arrow images, pagination buttons etc. You can freely replace them with your own, but keep the names the same if you don't want to change anything in the CSS we will build later.
Content/Site.css – the main CSS theme, where we will add the theme for our table too.
Controllers/HomeController.cs – the controller I'm using for this project.
Models/Person.cs – for this demo, I'm using a Person.cs class.
Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – the scripts required to make the magic happen.
Views/Home/Index.cshtml – Index view (Razor view engine).
The other items are not important for the demo.

ASP.NET MVC

1. Model

In this demo I use only one Person class, which defines a Person entity with several properties. You can use your own model, maybe one which will access data from a database or any other resource.

Person.cs

public class Person
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Email { get; set; }
    public int? Phone { get; set; }
    public DateTime? DateAdded { get; set; }
    public int? Age { get; set; }

    public Person(string name, string surname, string email,
        int? phone, DateTime? dateadded, int? age)
    {
        Name = name;
        Surname = surname;
        Email = email;
        Phone = phone;
        DateAdded = dateadded;
        Age = age;
    }
}

2. View

In our example, we have only one Index.cshtml page, where the Razor view engine is used. The Razor view engine is my favorite for ASP.NET MVC because it's very intuitive, fluid, and keeps your code clean.

3. Controller

Since this is a simple example with one page, we use one HomeController.cs with two methods: one of ActionResult type (Index), and another, GetPeople(), used to create and return a list of people.
HomeController.cs

public class HomeController : Controller
{
    //
    // GET: /Home/
    public ActionResult Index()
    {
        ViewBag.People = GetPeople();
        return View();
    }

    public List<Person> GetPeople()
    {
        List<Person> listPeople = new List<Person>();

        listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070, DateTime.Now, 25));
        listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));
        listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));
        listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));
        listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));
        listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));
        listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));
        listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));
        listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));
        listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));
        listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));
        listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));
        listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));

        return listPeople;
    }
}

TABLE CSS/HTML DESIGN

Now, let's start with the implementation. First of all, let's create the table structure and the main CSS.

1. HTML Structure

@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <title>ASP.NET & jQuery</title>
    <!-- referencing styles, scripts and writing custom js scripts will go here -->
</head>
<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th> value </th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>value</td>
                </tr>
            </tbody>
            <tfoot>
                <tr>
                    <th> value </th>
                </tr>
            </tfoot>
        </table>
        <div id="pager">

        </div>
    </div>
</body>
</html>

So, this is the main structure you need to create for each of your tables where you want to apply the functionality we will create. Of course the scripts are referenced once ;). As you see, our table has the class tablesorter, and we also have a div with id pager. In the next steps we will use both of these to create the needed functionalities.
The complete Index.cshtml, coded to get the data from the controller and display it in the page, is:

<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </thead>
            <tbody>
                @{
                    foreach (var p in ViewBag.People)
                    {
                <tr>
                    <td>@p.Name</td>
                    <td>@p.Surname</td>
                    <td>@p.Email</td>
                    <td>@p.Phone</td>
                    <td>@p.DateAdded</td>
                </tr>
                    }
                }
            </tbody>
            <tfoot>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </tfoot>
        </table>
        <div id="pager" style="position: none;">
            <form>
            <img src="@Url.Content("~/Content/images/first.png")" class="first" />
            <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />
            <input type="text" class="pagedisplay" />
            <img src="@Url.Content("~/Content/images/next.png")" class="next" />
            <img src="@Url.Content("~/Content/images/last.png")" class="last" />
            <select class="pagesize">
                <option selected="selected" value="5">5</option>
                <option value="10">10</option>
                <option value="20">20</option>
                <option value="30">30</option>
                <option value="40">40</option>
            </select>
            </form>
        </div>
    </div>
</body>

So, mainly the structure is the same. I have added Razor code to create the table with data retrieved from ViewBag.People, which was filled with data in the home controller.

2. CSS Design

The CSS code I've created is:

/* DEMO TABLE */
body {
    font-size: 75%;
    font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;
    color: #232323;
    background-color: #fff;
}
table { border-spacing: 0; border: 1px solid gray; }
table.tablesorter thead tr .header {
    background-image: url(images/bg.png);
    background-repeat: no-repeat;
    background-position: center right;
    cursor: pointer;
}
table.tablesorter tbody td {
    color: #3D3D3D;
    padding: 4px;
    background-color: #FFF;
    vertical-align: top;
}
table.tablesorter tbody tr.odd td {
    background-color: #F0F0F6;
}
table.tablesorter thead tr .headerSortUp {
    background-image: url(images/asc.png);
}
table.tablesorter thead tr .headerSortDown {
    background-image: url(images/desc.png);
}
table th {
    width: 150px;
    border: 1px outset gray;
    background-color: #3C78B5;
    color: White;
    cursor: pointer;
}
table thead th:hover { background-color: Yellow; color: Black; }
table td { width: 150px; border: 1px solid gray; }

PAGINATION AND SORTING

Now, when everything is ready and we have the data, let's wire up the pagination and sorting functionalities.
1. jQuery Scripts Referencing

<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script>

2. jQuery Sorting and Pagination Script

<script type="text/javascript">
    $(function () {
        $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })
        .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });
    });
</script>

So, with only two lines of code, I'm using both the tablesorter and tablesorterPager plugins, giving some options to both.

Options added:

tablesorter – widthFixed: true – gives the columns fixed widths.
tablesorter – sortList: [[0,0]] – an array of instructions for per-column sorting and direction in the format [[columnIndex, sortDirection], ...], where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for ascending and 1 for descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like: [[0,0],[1,0]] (source: http://tablesorter.com/docs/).
tablesorterPager – container: $("#pager") – specifies the pager container, the div with id pager in our case.
tablesorterPager – size: the default size of each page. I take the value of the selected option, so if you mark any other option in your select list as selected, that number of rows per page becomes the table's default too.

END RESULTS

1. Table once the page is loaded (the default is 5 results per page, automatically sorted by the 1st column, as sortList specifies)
2. Sorted by Phone descending
3. Pagination changed to 10 items per page
4. Sorted by Phone and Name (use SHIFT to sort on multiple columns)
5. Sorted by Date Added
6. Page 3, 5 items per page

ADDITIONAL ENHANCEMENTS

We can make additional enhancements to the table, such as adding a search for each column. I will cover this in one of my next blogs. Stay tuned.

DEMO PROJECT

You can download the demo project source code from HERE.

CONCLUSION

Once you finish with the demo, run your page and open the source code. You will be amazed at the purity of your code. Working with pagination on the client side can be very useful. One of its benefits is performance, but if you have thousands of rows in your tables, you will get the opposite result performance-wise. Hence, sometimes it is a good idea to do the pagination on the back-end, and the best compromise is to combine both approaches. I use at most 500 rows on the client side, and once the user reaches the last page, we can trigger an AJAX postback that gets the next 500 rows using server-side pagination of the same data (a minimal sketch of the server-side half follows below). I would like to recommend the following blog post: http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return paged results from repositories. I hope this was a helpful post for you. Wait for my next posts ;). Please do let me know your feedback.

Best Regards, Hajan
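A minimal sketch of that server-side half, in the spirit of the PagedResult<T> post linked above (the PagedResult<T> container and the GetPeoplePaged() method are hypothetical names for illustration, not types from that post or from this demo project):

using System.Collections.Generic;
using System.Linq;

// Hypothetical paged-result container (names are illustrative only).
public class PagedResult<T>
{
    public IList<T> Items { get; set; }
    public int TotalCount { get; set; }
    public int Page { get; set; }
    public int PageSize { get; set; }
}

public class PersonRepository
{
    // Returns one chunk (e.g. 500 rows) of people; the client-side
    // tablesorter pager then sorts and pages within that chunk.
    public PagedResult<Person> GetPeoplePaged(IQueryable<Person> source, int page, int pageSize)
    {
        return new PagedResult<Person>
        {
            // A stable ordering is required before Skip/Take
            Items = source.OrderBy(p => p.Name)
                          .Skip((page - 1) * pageSize)
                          .Take(pageSize)
                          .ToList(),
            TotalCount = source.Count(),
            Page = page,
            PageSize = pageSize
        };
    }
}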

    Read the article

  • CCNet and .Net 4.0

    - by Pino
Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.

We have just upgraded one of our projects to .Net 4.0 and have re-configured our build server (Cruise Control .Net). Can anyone help with the above error? I have included more of the trace below.

> debug:
>
> [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.website\web.config'.
> [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.admin\web.config'.
> [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.website\bin\Intelligencia.UrlRewriter.dll'.
> [copy] Copying 1 file to 'C:\Builds\SilverChip\Working\sc.website\bin\UrlRewritingNet.UrlRewriter.dll'.
> [exec] Microsoft (R) Build Engine Version 4.0.30319.1
> [exec] [Microsoft .NET Framework, Version 4.0.30319.1]
> [exec] Copyright (C) Microsoft Corporation 2007. All rights reserved.
> [exec]
> [exec] Build started 11/05/2010 12:02:16.
> [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" on node 1 (default targets).
> [exec] ValidateSolutionConfiguration:
> [exec] Building solution configuration "Debug|Any CPU".
> [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" (1) is building "C:\Builds\SilverChip\Working\sc.lib\sc.lib.csproj" (2) on node 1 (default targets).
> [exec] CoreCompile:
> [exec] Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
> [exec] _CopyAppConfigFile:
> [exec] Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files.
> [exec] CopyFilesToOutputDirectory:
> [exec] sc.lib -> C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.dll
> [exec] Done Building Project "C:\Builds\SilverChip\Working\sc.lib\sc.lib.csproj" (default targets).
> [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" (1) is building "C:\Builds\SilverChip\Working\sc_admin.metaproj" (3) on node 1 (default targets).
> [exec] Build:
> [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.dll" to "sc.admin\\Bin\sc.lib.dll".
> [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\SubSonic.Core.dll" to "sc.admin\\Bin\SubSonic.Core.dll".
> [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.pdb" to "sc.admin\\Bin\sc.lib.pdb".
> [exec] C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe -v /sc.admin -p sc.admin\ -u -f -d PrecompiledWeb\sc.admin\
> [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. [C:\Builds\SilverChip\Working\sc_admin.metaproj]
> [exec] Done Building Project "C:\Builds\SilverChip\Working\sc_admin.metaproj" (default targets) -- FAILED.
> [exec] Project "C:\Builds\SilverChip\Working\Silverchip.sln" (1) is building "C:\Builds\SilverChip\Working\sc_website.metaproj" (4) on node 1 (default targets).
> [exec] Build:
> [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.dll" to "sc.website\\Bin\sc.lib.dll".
> [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\SubSonic.Core.dll" to "sc.website\\Bin\SubSonic.Core.dll".
> [exec] Copying file from "C:\Builds\SilverChip\Working\sc.lib\bin\Debug\sc.lib.pdb" to "sc.website\\Bin\sc.lib.pdb".
> [exec] C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe -v /sc.website -p sc.website\ -u -f -d PrecompiledWeb\sc.website\
> [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. [C:\Builds\SilverChip\Working\sc_website.metaproj]
> [exec] Done Building Project "C:\Builds\SilverChip\Working\sc_website.metaproj" (default targets) -- FAILED.
> [exec] Done Building Project "C:\Builds\SilverChip\Working\Silverchip.sln" (default targets) -- FAILED.
> [exec]
> [exec] Build FAILED.
> [exec]
> [exec] "C:\Builds\SilverChip\Working\Silverchip.sln" (default target) (1) ->
> [exec] "C:\Builds\SilverChip\Working\sc_admin.metaproj" (default target) (3) ->
> [exec] (Build target) ->
> [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. [C:\Builds\SilverChip\Working\sc_admin.metaproj]
> [exec]
> [exec]
> [exec] "C:\Builds\SilverChip\Working\Silverchip.sln" (default target) (1) ->
> [exec] "C:\Builds\SilverChip\Working\sc_website.metaproj" (default target) (4) ->
> [exec] ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. [C:\Builds\SilverChip\Working\sc_website.metaproj]
> [exec]
> [exec] 0 Warning(s)
> [exec] 2 Error(s)
> [exec]
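The "incorrect format" message is the classic BadImageFormatException symptom of a 32/64-bit mismatch: the trace shows the 32-bit aspnet_compiler.exe (under C:\Windows\Microsoft.NET\Framework) precompiling sites whose referenced assemblies target the other bitness. Assuming a 64-bit build server, one common remedy is to run the same precompile step with the 64-bit toolchain instead, for example:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v /sc.website -p sc.website\ -u -f -d PrecompiledWeb\sc.website\

Alternatively, building the offending projects for x86 so that the compiler and the referenced assemblies agree on bitness tends to resolve the same class of error.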

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:

- Storage: Import/Export Hard Disk Drives to your Storage Accounts
- HDInsight: General Availability of our Hadoop Service in the cloud
- Virtual Machines: New VM Gallery, ACL support for VIPs
- Web Sites: WebSocket and Remote Debugging Support
- Notification Hubs: Segmented customer push notification support with tag expressions
- TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
- Developer Analytics: New Relic support for Web Sites + Mobile Services
- Service Bus: Support for partitioned queues and topics
- Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign up for the Import/Export preview):

Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required:

For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools:

The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal:

The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

- Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
- Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images.
- MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
- Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
- Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance:

This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads:

Here are the default behaviors for ACLs in Windows Azure:

- By default (i.e. no rules specified), all traffic is permitted.
- When using only Permit rules, all other traffic is denied.
- When using only Deny rules, all other traffic is permitted.
- When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it.

You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”:

Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites.

With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it:

When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio:

Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab:

When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message—e.g.
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan
- Avoid presenting subsets of a segment with irrelevant content—e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: To “all my group except me” - group:id && !user:id
- Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required.

The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. Below is a screen-shot of the Git repository hosted within Team Foundation Services:

I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository:

The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy.

To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will create a dialog like below. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected:

When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”:

Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”):

When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier:

Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site:

This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it:

Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything):

Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic:

Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:

Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account):

Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site:

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu:

This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal):

Once we install the NuGet package we are all set to go. We’ll simply republish the web site to Windows Azure and New Relic will automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account:

When we do this a new browser tab will launch with the New Relic admin tool loaded within it:

We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information – including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores also provide higher availability.

You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (a code sketch of doing the same thing programmatically follows at the end of this section):

Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available:

The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month:

The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
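As referenced in the Service Bus section above, here is a rough sketch of creating a partitioned queue from code rather than from the portal, using the Microsoft.ServiceBus NuGet package's NamespaceManager and QueueDescription types. The queue name and connection string are placeholder assumptions.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class CreatePartitionedQueue
{
    static void Main()
    {
        // Placeholder connection string for your Service Bus namespace.
        var connectionString =
            "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=yourkey";

        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // EnablePartitioning spreads the queue across multiple message brokers
        // and messaging stores, which is what yields the higher throughput and
        // availability described above.
        var description = new QueueDescription("ordersqueue")
        {
            EnablePartitioning = true
        };

        if (!namespaceManager.QueueExists(description.Path))
            namespaceManager.CreateQueue(description);
    }
}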
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • What’s New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier.

WebForms

The WebForms engine is the area that has received the most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and ViewState on WebForm pages.

Take Control of Your ClientIDs

Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs.

In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 is the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page.

<%@Page Language="C#"
    MasterPageFile="~/Site.Master"
    CodeBehind="WebForm2.aspx.cs"
    Inherits="WebApplication1.WebForm2" %>

<asp:Content ID="content"
             ContentPlaceHolderID="content"
             runat="server"
             ClientIDMode="Static" >
    <asp:TextBox runat="server" ID="txtName" />
</asp:Content>

The four available ClientIDMode values are:

AutoID

This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place.

<input name="ctl00$content$txtName" type="text" id="ctl00_content_txtName" />

This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks.

Static

This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids:

<input name="ctl00$content$txtName" type="text" id="txtName" />

Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict.
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique ids for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix.

Predictable

The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows:

This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_).

The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this:

<input name="ctl00$content$txtName" type="text" id="content_txtName" />

which gives an id that is based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi-unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName:

<input name="ctl00$content$txtName" type="text" id="ctl00_content_txtName" />

The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with that NamingContainer's id rather than the whole ctl000_content munging.

The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode.

Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable.
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example:

<asp:GridView runat="server" ID="gvItems"
              AutoGenerateColumns="false"
              ClientIDMode="Static"
              ClientIDRowSuffix="Id">
    <Columns>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:Label runat="server" id="txtName"
                           Text='<%# Eval("Name") %>'
                           ClientIDMode="Predictable"/>
            </ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:Label runat="server" id="txtId"
                           Text='<%# Eval("Id") %>'
                           ClientIDMode="Predictable" />
            </ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>

generates client Ids inside of a column in the master page described earlier:

<td>
    <span id="txtName_0">Rick</span>
</td>

where the value after the underscore is the ClientIDRowSuffix field - in this case the “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode="Predictable" in order for the ClientIDRowSuffix to be applied, and the parent GridView control needs to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically.

Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with a Static ListView and Predictable child controls:

<span id="ctrl0_txtName_0">Rick</span>

I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix, short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls, using <%= %> ids for the ListView might not be a bad idea anyway.

Inherit

The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting.

For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx.

ClientID Enhancements Summary

The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section), which applies the setting down to the entire page - my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls.

ViewStateMode

Another area of major criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server.
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates.

Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids most ViewState problems in the first place.

In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off, and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it.

ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on for particular controls:

<%@Page Language="C#"
    CodeBehind="WebForm2.aspx.cs"
    Inherits="Westwind.WebStore.WebForm2"
    ClientIDMode="Static"
    ViewStateMode="Disabled"
    EnableViewState="true" %>

<!-- this control has viewstate -->
<asp:TextBox runat="server" ID="txtName" ViewStateMode="Enabled" />

<!-- this control has no viewstate - it inherits from its parent container -->
<asp:TextBox runat="server" ID="txtAddress" />

Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on.

Inline HTML Encoding

HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values.
The encoding expression syntax looks like this:

<%: "<script type='text/javascript'>" +
    "alert('Really?');</script>" %>

which produces properly encoded HTML:

&lt;script type=&#39;text/javascript&#39;&gt;alert(&#39;Really?&#39;);&lt;/script&gt;

Effectively this is a shortcut to:

<%= HttpUtility.HtmlEncode(
    "<script type='text/javascript'>" +
    "alert('Really?');</script>") %>

Of course the <%: %> syntax can also evaluate expressions just like <%= %>, so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values:

<%: Model.Address.Street %>

This snippet shows displaying data from your application’s data store or, more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine, including ASP.NET MVC, benefits from this feature.

ScriptManager Enhancements

The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading.

The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of the ASP.NET AJAX script included, if you are using the library.

There’s also a new EnableCdn property that forces any script that has a WebResource attribute with a CdnPath property set to be served from that CDN supplied URL. If the script has this attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath.

[assembly: WebResource(
   "Westwind.Web.Resources.ww.jquery.js",
   "application/x-javascript",
   CdnPath = "http://mysite.com/scripts/ww.jquery.min.js")]

Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example.

Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which makes it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config:

<asp:ScriptManager runat="server" id="Id"
        EnableCdn="true"
        AjaxFrameworkMode="disabled">
    <Scripts>
        <asp:ScriptReference
            Name="Westwind.Web.Resources.ww.jquery.js"
            Assembly="Westwind.Web" />
    </Scripts>
</asp:ScriptManager>

The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with.
Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js):

<asp:ScriptManager runat="server" id="Id"
        EnableCdn="true"
        AjaxFrameworkMode="disabled">
    <CompositeScript
        Path="~/scripts/allscripts.js">
        <Scripts>
            <asp:ScriptReference
                Path="~/scripts/jquery.js" />
            <asp:ScriptReference
                Path="~/scripts/ww.jquery.js" />
            <asp:ScriptReference
                Name="Westwind.Web.Resources.editors.js"
                Assembly="Westwind.Web" />
        </Scripts>
    </CompositeScript>
</asp:ScriptManager>

When you render this into HTML, you’ll see a single script reference in the page:

<script src="scripts/allscripts.debug.js" type="text/javascript"></script>

All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty, but the files have to be there. This is pretty cool, but you want to be really careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally.

The script manager also allows you to override native ASP.NET AJAX scripts now, as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy.

Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point, so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers.

MetaDescription, MetaKeywords Page Properties

There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page.
You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself:

Page.MetaKeywords = "ASP.NET,4.0,New Features";
Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0";

Note that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag:

<%@Page Language="C#"
    CodeBehind="WebForm2.aspx.cs"
    Inherits="Westwind.WebStore.WebForm2"
    ClientIDMode="Static"
    MetaDescription="Article that discusses what's new in ASP.NET 4.0"
    MetaKeywords="ASP.NET,4.0,New Features" %>

Nothing earth-shattering but quite convenient.

Visual Studio 2010 Enhancements for Web Development

For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific, but they are useful for Web developers in general.

Text Editors

Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors, including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor, and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing.

On the negative side, I’ve noticed that occasionally the code editor - and especially the HTML and JavaScript editors - will lose the ability to use various navigation keys like the arrow, backspace and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented, so I suspect this will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors.

Multi-Targeting

Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2, 3.0 and 3.5 applications, which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions.

Multi-Monitor Support

One cool feature of Visual Studio 2010 is the ability to easily drag windows out of the Visual Studio environment and out onto the desktop, including onto another monitor. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment.

IntelliSense Snippets for HTML and JavaScript Editors

The HTML and JavaScript editors now finally support IntelliSense snippets to create macro-based template expansions, which have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text.
The XML syntax for these snippets is straightforward and it’s pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see existing ones you’ve created.

Code snippets are some of the biggest time savers, and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven’t done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code that you write, and convert it into snippets. I’ve been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets.

jQuery Integration Is Now Native

jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in the various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery.

Summary

ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don’t compromise backwards compatibility, and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven’t gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there’s no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
I don't know about you, but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this, although I always seem to be hunting for a more specific generic type that provides a string-keyed dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T>, but it uses the actual values as keys. In most key/value pair situations, for me a string key is what I want to work off of. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization, it produces some pretty atrocious XML.

What doesn't work?

First off, Dictionary serialization with the XmlSerializer doesn't work, so the following fails: [TestMethod] public void DictionaryXmlSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXml(bag)); } public string ToXml(object obj) { if (obj == null) return null; StringWriter sw = new StringWriter(); XmlSerializer ser = new XmlSerializer(obj.GetType()); ser.Serialize(sw, obj); return sw.ToString(); }

The error you get with this is: System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary. Got it! BTW, the same is true with binary serialization.
Running the same code above against the DataContractSerializer does work: [TestMethod] public void DictionaryDataContextSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXmlDcs(bag)); } public string ToXmlDcs(object value, bool throwExceptions = false) { var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null); MemoryStream ms = new MemoryStream(); ser.WriteObject(ms, value); return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length); }

This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here):

<ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <KeyValueOfstringanyType>
    <Key>key</Key>
    <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key2</Key>
    <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key3</Key>
    <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key4</Key>
    <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key5</Key>
    <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key7</Key>
    <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value>
  </KeyValueOfstringanyType>
</ArrayOfKeyValueOfstringanyType>

Ouch! That seriously hurts the eye! :-) Worse though, it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal.

Why should I care?

As a little background, in one of my applications I have a need for a flexible property bag that is used on a free-form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML-based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: when the data is saved, serialize the collection into an XML string and store it in the database; when the data is read back, pick up the XML and, if the collection on the entity is accessed, automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post). While the DataContractSerializer would work, its verbosity is problematic, both for the size of the generated XML strings and because users can manually edit this XML-based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly.

Custom XML Serialization with a PropertyBag Class

So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides for a serializable Dictionary.
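Before digging into the class itself, here is a minimal sketch of the load/save pattern just described. The entity and member names here are hypothetical illustrations rather than code from the actual application, and the sketch assumes the ToXml()/CreateFromXml() helpers that the PropertyBag class shown further below exposes:

// Hypothetical entity sketch (illustrative names only): a normal record
// with one free-form XML column that backs an arbitrary property bag.
public class CustomerEntity
{
    // Mapped database field holding the raw XML string
    public string PropertyXml { get; set; }

    private PropertyBag _properties;

    // Lazily deserialize the stored XML into the bag on first access
    public PropertyBag Properties
    {
        get
        {
            if (_properties == null)
                _properties = PropertyBag.CreateFromXml(PropertyXml);
            return _properties;
        }
    }

    // Called as part of Save(): flush the bag back into the XML column
    public void FlushProperties()
    {
        if (_properties != null)
            PropertyXml = _properties.ToXml();
    }
}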
The class is basically a custom Dictionary<string,TValue> implementation where the keys are always strings. The results are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TValue> and PropertyBag classes provide these features:

Subclassed from Dictionary<T,V>
Implements IXmlSerializable with a cleanish XML format
ToXml() and FromXml() methods to export and import to and from XML strings
Static CreateFromXml() method to create an instance

It's simple enough, as it's merely a Dictionary<string,object> subclass that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use: [TestMethod] public void PropertyBagTwoWayObjectSerializationTest() { var bag = new PropertyBag(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42,45,66 } ); bag.Add("Key8", null); bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 }); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag["key"] as string == "Value"); Assert.IsInstanceOfType( bag["Key3"], typeof(Guid)); Assert.IsNull(bag["Key8"]); //Assert.IsNull(bag["Key10"]); Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject)); }

This uses the PropertyBag class, which is a PropertyBag<object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario, as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this: [TestMethod] public void PropertyBagTwoWayValueTypeSerializationTest() { var bag = new PropertyBag<decimal>(); bag.Add("key", 10M); bag.Add("Key1", 100.10M); bag.Add("Key2", 200.10M); bag.Add("Key3", 300.10M); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag.Get("Key1") == 100.10M); Assert.IsTrue(bag.Get("Key3") == 300.10M); }

and produces typed results of type decimal. The types can be either value or reference types, the combination of which actually proved to be a little more tricky than anticipated due to the null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string.
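As an aside on the generic typing wrinkle just mentioned, here is a minimal, hypothetical sketch (not code from the actual class) of how default(T) and Convert.ChangeType() can cooperate to turn a raw string back into a TValue:

// Hypothetical helper illustrating the default(T)/Convert.ChangeType() trick.
static TValue ConvertRawValue<TValue>(string raw)
{
    // default(TValue) yields null for reference types and a zero value for
    // value types, which covers the empty/missing case generically.
    if (string.IsNullOrEmpty(raw))
        return default(TValue);

    // Convert.ChangeType returns object, so an explicit cast back to TValue
    // is required; this works for IConvertible types such as decimal,
    // bool and DateTime.
    return (TValue)Convert.ChangeType(raw, typeof(TValue));
}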
The XML produced for the first example looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value>Value</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="___System.Guid">
      <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid>
    </value>
  </item>
  <item>
    <key>Key4</key>
    <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value>
  </item>
  <item>
    <key>Key5</key>
    <value type="boolean">true</value>
  </item>
  <item>
    <key>Key7</key>
    <value type="base64Binary">Ki1C</value>
  </item>
  <item>
    <key>Key8</key>
    <value type="nil" />
  </item>
  <item>
    <key>Key9</key>
    <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject">
      <ComplexObject>
        <Name>Rick</Name>
        <Entered>2011-09-26T17:45:58.5789578-10:00</Entered>
        <Count>10</Count>
      </ComplexObject>
    </value>
  </item>
</properties>

The format is a bit cleaner than the DataContractSerializer. Each item is serialized into <key> <value> pairs. If the value is a string, no type information is written. Since string tends to be the most common type, this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types, so things like decimal, datetime, boolean and base64Binary are encoded using their Xml type values. All other types are embedded with a hokey format that describes the .NET type, preceded by three underscores, and are then encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS and it works, as long as you're serializing back and forth between .NET clients at least.

The XML generated from the second example that uses PropertyBag<decimal> looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value type="decimal">10</value>
  </item>
  <item>
    <key>Key1</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">200.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="decimal">300.10</value>
  </item>
</properties>

How does it work

As I mentioned, there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom Xml Serialization and a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code: /// <summary> /// Creates a serializable string/object dictionary that is XML serializable /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> [XmlRoot("properties")] public class PropertyBag : PropertyBag<object> { /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml">Serialize</param> /// <returns></returns> public static PropertyBag CreateFromXml(string xml) { var bag = new PropertyBag(); bag.FromXml(xml); return bag; } } /// <summary> /// Creates a serializable string for generic types that is XML serializable. /// /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name.
Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam> [XmlRoot("properties")] public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable { /// <summary> /// Not implemented - this means no schema information is passed /// so this won't work with ASMX/WCF services. /// </summary> /// <returns></returns> public System.Xml.Schema.XmlSchema GetSchema() { return null; } /// <summary> /// Serializes the dictionary to XML. Keys are /// serialized to element names and values as /// element values. An xml type attribute is embedded /// for each serialized element - a .NET type /// element is embedded for each complex type and /// prefixed with three underscores. /// </summary> /// <param name="writer"></param> public void WriteXml(System.Xml.XmlWriter writer) { foreach (string key in this.Keys) { TValue value = this[key]; Type type = null; if (value != null) type = value.GetType(); writer.WriteStartElement("item"); writer.WriteStartElement("key"); writer.WriteString(key as string); writer.WriteEndElement(); writer.WriteStartElement("value"); string xmlType = XmlUtils.MapTypeToXmlType(type); bool isCustom = false; // Type information attribute if not string if (value == null) { writer.WriteAttributeString("type", "nil"); } else if (!string.IsNullOrEmpty(xmlType)) { if (xmlType != "string") { writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } } else { isCustom = true; xmlType = "___" + value.GetType().FullName; writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } // Actual deserialization if (!isCustom) { if (value != null) writer.WriteValue(value); } else { XmlSerializer ser = new XmlSerializer(value.GetType()); ser.Serialize(writer, value); } writer.WriteEndElement(); // value writer.WriteEndElement(); // item } } /// <summary> /// Reads the custom serialized format /// </summary> /// <param name="reader"></param> public void ReadXml(System.Xml.XmlReader reader) { this.Clear(); while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element && reader.Name == "key") { string xmlType = null; string name = reader.ReadElementContentAsString(); // item element reader.ReadToNextSibling("value"); if (reader.MoveToNextAttribute()) xmlType = reader.Value; reader.MoveToContent(); TValue value; if (xmlType == "nil") value = default(TValue); // null else if (string.IsNullOrEmpty(xmlType)) { // value is a string or object and we can assign TValue to value string strval = reader.ReadElementContentAsString(); value = (TValue) Convert.ChangeType(strval, typeof(TValue)); } else if (xmlType.StartsWith("___")) { while (reader.Read() && reader.NodeType != XmlNodeType.Element) { } Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3)); //value = reader.ReadElementContentAs(type,null); XmlSerializer ser = new XmlSerializer(type); value = (TValue)ser.Deserialize(reader); } else value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null); this.Add(name, value); } } } /// <summary> /// Serializes this dictionary to an XML string /// </summary> /// <returns>XML String or Null if it fails</returns> public string ToXml() { string xml = null; SerializationUtils.SerializeObject(this, 
out xml); return xml; } /// <summary> /// Deserializes from an XML string /// </summary> /// <param name="xml"></param> /// <returns>true or false</returns> public bool FromXml(string xml) { this.Clear(); // if xml string is empty we return an empty dictionary if (string.IsNullOrEmpty(xml)) return true; var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>; if (result != null) { foreach (var item in result) { this.Add(item.Key, item.Value); } } else // null is a failure return false; return true; } /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml"></param> /// <returns></returns> public static PropertyBag<TValue> CreateFromXml(string xml) { var bag = new PropertyBag<TValue>(); bag.FromXml(xml); return bag; } } }

The code uses a couple of small helper classes, SerializationUtils and XmlUtils, for mapping Xml types to and from .NET, both of which are from the Westwind.Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods, .ToXml() and .FromXml(), that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to actually persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string key dictionary and then be able to turn around with 1 line of code to persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it.

Resources

You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you.

PropertyBag Source Code
SerializationUtils and XmlUtils
Westwind.Utilities Assembly on NuGet (add from Visual Studio)

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET  CSharp

    Read the article

  • Records not being saved to core data sqlite file

    - by esd100
    I'm a complete newbie when it comes to iOS programming and much less Core Data. It's rather non-intuitive for me, as I really came into my own with programming with MATLAB, which I guess is more of a 'scripting' language. At any rate, my problem is that I had no idea what I had to do to create a database for my application. So I read a little bit and thought I had to create a SQL database of my stuff and then import it. Long story short, I created a SQLite db and I want to use the work I have already done to import stuff into my CoreData database. I tried exporting to comma-delimited files and xml files and reading those in, but I didn't like it and it seemed like an extra step that I shouldn't need to do. So, I imported the SQLite database into my resources and added the sqlite framework. I have my core data model setup and it is setting up the SQLite database for the model correctly in the background. When I run through my program to add objects to my entities, it seems to work and I can even fetch results afterward. However, when I inspect the Core Data Database SQLite file, no records have been saved. How is it possible for it to fetch results but not save them to the database? - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ //load in the path for resources NSString *paths = [[NSBundle mainBundle] resourcePath]; NSString *databaseName = @"histology.sqlite"; NSString *databasePath = [paths stringByAppendingPathComponent:databaseName]; [self createDatabase:databasePath ]; NSError *error; if ([[self managedObjectContext] save:&error]) { NSLog(@"Whoops, couldn't save: %@", [error localizedDescription]); } // Test listing all CELLS from the store NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entityMO = [NSEntityDescription entityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; [fetchRequest setEntity:entityMO]; NSArray *fetchedObjects = [[self managedObjectContext] executeFetchRequest:fetchRequest error:&error]; for (CELL *cellName in fetchedObjects) { //NSLog(@"cellName: %@", cellName); } -(void) createDatabase:databasePath { NSLog(@"The createDatabase function was entered."); NSLog(@"The databasePath is %@ ",[databasePath description]); // Setup the database object sqlite3 *histoDatabase; // Open the database from filessytem if(sqlite3_open([databasePath UTF8String], &histoDatabase) == SQLITE_OK) { NSLog(@"The database was opened"); // Setup the SQL Statement and compile it for faster access const char *sqlStatement = "SELECT * FROM CELL"; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) != SQLITE_OK) { NSAssert1(0, @"Error while creating add statement. 
'%s'", sqlite3_errmsg(histoDatabase)); } if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { // Loop through the results and add them to cell MO array while(sqlite3_step(compiledStatement) == SQLITE_ROW) { CELL *cellMO = [NSEntityDescription insertNewObjectForEntityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; if (sqlite3_column_type(compiledStatement, 0) != SQLITE_NULL) { cellMO.cellName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; } else { cellMO.cellName = @"undefined"; } if (sqlite3_column_type(compiledStatement, 1) != SQLITE_NULL) { cellMO.cellDescription = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)]; } else { cellMO.cellDescription = @"undefined"; } NSLog(@"The contents of NSString *cellName = %@",[cellMO.cellName description]); } } // Release the compiled statement from memory sqlite3_finalize(compiledStatement); } sqlite3_close(histoDatabase); } I have a feeling that it has something to do with the timing of opening/closing both of the databases? Attached I have some SQL debugging output to the terminal 2012-05-28 16:03:39.556 MedPix[34751:fb03] The createDatabase function was entered. 2012-05-28 16:03:39.557 MedPix[34751:fb03] The databasePath is /Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/MedPix.app/histology.sqlite 2012-05-28 16:03:39.559 MedPix[34751:fb03] The database was opened 2012-05-28 16:03:39.560 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.575 MedPix[34751:fb03] CoreData: annotation: Connecting to sqlite database file at "/Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/Documents/MedPix.sqlite" 2012-05-28 16:03:39.576 MedPix[34751:fb03] CoreData: annotation: creating schema. 2012-05-28 16:03:39.577 MedPix[34751:fb03] CoreData: sql: pragma page_size=4096 2012-05-28 16:03:39.578 MedPix[34751:fb03] CoreData: sql: pragma auto_vacuum=2 2012-05-28 16:03:39.630 MedPix[34751:fb03] CoreData: sql: BEGIN EXCLUSIVE 2012-05-28 16:03:39.631 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.632 MedPix[34751:fb03] CoreData: sql: CREATE TABLE ZCELL ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZCELLDESCRIPTION VARCHAR, ZCELLNAME VARCHAR ) ... 2012-05-28 16:03:39.669 MedPix[34751:fb03] CoreData: annotation: Creating primary key table. 2012-05-28 16:03:39.671 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER) 2012-05-28 16:03:39.672 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_PRIMARYKEY(Z_ENT, Z_NAME, Z_SUPER, Z_MAX) VALUES(1, 'CELL', 0, 0) ... 2012-05-28 16:03:39.701 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB) 2012-05-28 16:03:39.702 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.703 MedPix[34751:fb03] CoreData: sql: DELETE FROM Z_METADATA WHERE Z_VERSION = ? 2012-05-28 16:03:39.704 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_METADATA (Z_VERSION, Z_UUID, Z_PLIST) VALUES (?, ?, ?) 
2012-05-28 16:03:39.705 MedPix[34751:fb03] CoreData: sql: COMMIT 2012-05-28 16:03:39.710 MedPix[34751:fb03] CoreData: sql: pragma cache_size=200 2012-05-28 16:03:39.711 MedPix[34751:fb03] CoreData: sql: SELECT Z_VERSION, Z_UUID, Z_PLIST FROM Z_METADATA 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Beta Cell 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Gastric Chief Cell ... 2012-05-28 16:03:39.714 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.764 MedPix[34751:fb03] The createDatabase function has finished. Now fetching. 2012-05-28 16:03:39.765 MedPix[34751:fb03] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, t0.ZCELLDESCRIPTION, t0.ZCELLNAME FROM ZCELL t0 2012-05-28 16:03:39.766 MedPix[34751:fb03] CoreData: annotation: sql connection fetch time: 0.0008s 2012-05-28 16:03:39.767 MedPix[34751:fb03] CoreData: annotation: total fetch execution time: 0.0016s for 0 rows. 2012-05-28 16:03:39.768 MedPix[34751:fb03] cellName: <CELL: 0x6bbc120> (entity: CELL; id: 0x6bbc160 <x-coredata:///CELL/t57D10DDD-74E2-474F-97EE-E3BD0FF684DA34> ; data: { cellDescription = "S cells are cells which release secretin, found in the jejunum and duodenum. They are stimulated by a drop in pH to 4 or below in the small intestine's lumen. The released secretin will increase the s"; cellName = "S Cell"; organs = ( ); specimens = ( ); systems = ( ); tissues = ( ); }) ... Sections were cut short to abbreviate. But note that the fetch results contain information, but it says that total fetch execution was for "0" rows? How can that be? Any help will be greatly appreciated, especially detailed explanations. :) Thanks.

    Read the article

  • Problem with onRetainNonConfigurationInstance

    - by David
    I am writing a small app using the Android SDK, 1.6 target, and the Eclipse plug-in. I have layouts for both portrait and landscape mode, and most everything is working well. I say most because I am having issues with the orientation change. One part of the app has a ListView "on top of" another section. That section consists of 4 checkboxes, a button, and some TextViews. That is the portrait version. The landscape version replaces the ListView with a Spinner and rearranges some of the other components (but leaves the ALL resource ids the same). While in either orientation things work like they should. It's when the app switches orientation that things go off. Only 1 of the checkboxes maintains it's state throughout both layout changes. The other three CBs only maintain their state when going from landscape-portrait. I am also having problem getting the ListView/Spinner to correctly set themselves on changing. I am using onRetainNonConfigurationInstance() and creating a custom object that is returned. When I step through the code during a orientation change, the custom object is successfully pulled back out the the ether, and the widgets are being set to the correct values (inspecting them). But for some reason, once the onCreate is done, the checkboxes are not set to true. public class SkillSelectionActivity extends Activity { private Button rollDiceButton; private ListView skillListView; private CheckBox makeCommonCB; private CheckBox useEdgeCB; private CheckBox useSpecializationCB; private CheckBox isExtendedCB; private TextView skillNameView; private TextView skillRanksView; private TextView rollResultView; private TextView rollSuccessesView; private TextView rollFailuresView; private TextView extendedTestTotalView; private TextView extendedTestTimeView; private TextView skillSpecNameView; private int extendedTestTotal = 0; private int extendedTestTime = 0; private Skill currentSkill; private int currentPosition = 0; private SRCharacter character; private int skillSelectionType; private Spinner skillSpinnerView; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.skill_selection2); Intent intent = getIntent(); Bundle extras = intent.getExtras(); skillSelectionType = extras.getInt("SKILL_SELECTION"); skillListView = (ListView) findViewById(R.id.skillList); skillSpinnerView = (Spinner) findViewById(R.id.skillSpinner); rollDiceButton = (Button) findViewById(R.id.rollDiceButton); makeCommonCB = (CheckBox) findViewById(R.id.makeCommonCB); useEdgeCB = (CheckBox) findViewById(R.id.useEdgeCB); useSpecializationCB = (CheckBox) findViewById(R.id.useSpecializationCB); isExtendedCB = (CheckBox) findViewById(R.id.extendedTestCB); skillNameView = (TextView) findViewById(R.id.skillName); skillRanksView = (TextView) findViewById(R.id.skillRanks); rollResultView = (TextView) findViewById(R.id.rollResult); rollSuccessesView = (TextView) findViewById(R.id.rollSuccesses); rollFailuresView = (TextView) findViewById(R.id.rollFailures); extendedTestTotalView = (TextView) findViewById(R.id.extendedTestTotal); extendedTestTimeView = (TextView) findViewById(R.id.extendedTestTime); skillSpecNameView = (TextView) findViewById(R.id.skillSpecName); character = ((SR4DR) getApplication()).getCharacter(); ConfigSaver data = (ConfigSaver) getLastNonConfigurationInstance(); if (data == null) { makeCommonCB.setChecked(false); useEdgeCB.setChecked(false); useSpecializationCB.setChecked(false); isExtendedCB.setChecked(false); currentSkill = null; } 
else { currentSkill = data.getSkill(); currentPosition = data.getPosition(); useEdgeCB.setChecked(data.isEdge()); useSpecializationCB.setChecked(data.isSpec()); isExtendedCB.setChecked(data.isExtended()); makeCommonCB.setChecked(data.isCommon()); if (skillSpinnerView != null) { skillSpinnerView.setSelection(currentPosition); } if (skillListView != null) { skillListView.setSelection(currentPosition); } } // Register handler for UI elements rollDiceButton.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { // guts removed for clarity } }); makeCommonCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); isExtendedCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); useEdgeCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); useSpecializationCB.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { // guts removed for clarity } }); if (skillListView != null) { skillListView.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View v, int position, long id) { // guts removed for clarity } }); } if (skillSpinnerView != null) { skillSpinnerView.setOnItemSelectedListener(new MyOnItemSelectedListener()); } populateSkillList(); } private void populateSkillList() { String[] list = character.getSkillNames(skillSelectionType); if (list == null) { list = new String[0]; } if (skillListView != null) { ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, R.layout.list_item, list); skillListView.setAdapter(adapter); } if (skillSpinnerView != null) { ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_spinner_item, list); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); skillSpinnerView.setAdapter(adapter); } } public class MyOnItemSelectedListener implements OnItemSelectedListener { public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { // guts removed for clarity } public void onNothingSelected(AdapterView<?> parent) { // Do nothing. } } @Override public Object onRetainNonConfigurationInstance() { ConfigSaver cs = new ConfigSaver(currentSkill, currentPosition, useEdgeCB.isChecked(), useSpecializationCB.isChecked(), makeCommonCB.isChecked(), isExtendedCB.isChecked()); return cs; } class ConfigSaver { private Skill skill = null; private int position = 0; private boolean edge; private boolean spec; private boolean common; private boolean extended; public ConfigSaver(Skill skill, int position, boolean useEdge, boolean useSpec, boolean isCommon, boolean isExt) { this.setSkill(skill); this.position = position; this.edge = useEdge; this.spec = useSpec; this.common = isCommon; this.extended = isExt; } // public getters and setters removed for clarity } }

    Read the article

  • LNK2005: delete already defined error in VC++

    - by user333422
    Hi, I am newbie to VC++, so please forgive me if this is stupid. I have got 7 solutions under one project. Six of these build static libraries which is linked inside 7th to produce an exe. The runtime configuration of all the projects is MultiThreaded Debug. sln which is used to produce exe is using MFC other slns use standard runtiem library. I tried changing those to MFC, but still I am getting the same error. All the six slns build successfully. When I try to build exe - get the follwoing error: nafxcwd.lib(afxmem.obj) : error LNK2005: "void __cdecl operator delete(void *,int,char const *,int)" (??3@YAXPAXHPBDH@Z) already defined in tara_common.lib(fileStream.obj) This is weird because tara_common is one of the libs that I generated and fileStream.cpp is file that is just using delete on a pointer. I built it in verbose mod, so I am attaching the output. ENVIRONMENT SPACE _ACP_ATLPROV=C:\Program Files\Microsoft Visual Studio 9.0\VC\Bin\ATLProv.dll _ACP_INCLUDE=C:\Program Files\Microsoft Visual Studio 9.0\VC\include;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include _ACP_LIB=C:\fta\tara\database\build\Debug;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib\i386;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft Visual Studio 9.0\;C:\Program Files\Microsoft Visual Studio 9.0\lib _ACP_PATH=C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\tools;C:\Program Files\Microsoft Visual Studio 9.0\Common7\ide;C:\Program Files\HTML Help Workshop;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\;C:\WINDOWS\SysWow64;;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\GnuWin32\bin;C:\Python26 ALLUSERSPROFILE=C:\Documents and Settings\All Users CLIENTNAME=Console CommonProgramFiles=C:\Program Files\Common Files ComSpec=C:\WINDOWS\system32\cmd.exe FP_NO_HOST_CHECK=NO HOMEDRIVE=C: INCLUDE=C:\Program Files\Microsoft Visual Studio 9.0\VC\include;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include LIB=C:\fta\tara\database\build\Debug;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib\i386;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft Visual Studio 9.0\;C:\Program Files\Microsoft Visual Studio 9.0\lib LIBPATH=c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib LOGONSERVER=\xxx NUMBER_OF_PROCESSORS=1 OS=Windows_NT PATH=C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Program Files\Microsoft Visual Studio 
9.0\Common7\Tools\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\tools;C:\Program Files\Microsoft Visual Studio 9.0\Common7\ide;C:\Program Files\HTML Help Workshop;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\;C:\WINDOWS\SysWow64;;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\GnuWin32\bin;C:\Python26 PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH PROCESSOR_ARCHITECTURE=x86 PROCESSOR_IDENTIFIER=x86 Family 6 Model 23 Stepping 10, GenuineIntel PROCESSOR_LEVEL=6 PROCESSOR_REVISION=170a ProgramFiles=C:\Program Files SESSIONNAME=Console SystemDrive=C: SystemRoot=C:\WINDOWS VisualStudioDir=C:\Documents and Settings\sgupta\My Documents\Visual Studio 2008 VS90COMNTOOLS=C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\ WecVersionForRosebud.710=2 windir=C:\WINDOWS COMMAND LINES: Creating temporary file "c:\fta\tools\channel_editor\IvoDB\Debug\RSP00011018082288.rsp" with contents [ /VERBOSE /OUT:"C:\fta\tools\channel_editor\Builds\IvoDB_1_35_Debug.exe" /INCREMENTAL /LIBPATH:"......\3rdparty\boost_1_42_0\stage\lib" /MANIFEST /MANIFESTFILE:"Debug\IvoDB_1_35_Debug.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DELAYLOAD:"OleAcc.dll" /DEBUG /PDB:"C:\fta\tools\channel_editor\Builds\IvoDB_1_35_Debug.pdb" /SUBSYSTEM:WINDOWS /DYNAMICBASE /NXCOMPAT /MACHINE:X86 ......\tara\database\build\Debug\tara_database.lib ......\tara\common\build\Debug\tara_common.lib ......\3rdparty\sqliteWrapper\Debug\sqliteWrapper.lib ......\3rdparty\sqlite_3_6_18\Debug\sqlite.lib ......\stsdk\modules\win32\Debug\modules.lib ......\stsdk\axeapi\win32\Debug\axeapi.lib DelayImp.lib ".\Debug\AntennaSettings.obj" ".\Debug\AudioVideoSettings.obj" ".\Debug\CMDatabase.obj" ".\Debug\CMSettings.obj" ".\Debug\ColorFileDialog.obj" ".\Debug\ColorStatic.obj" ".\Debug\DragDropListCtrl.obj" ".\Debug\DragDropTreeCtrl.obj" ".\Debug\FavouriteEdit.obj" ".\Debug\FavTab.obj" ".\Debug\FindProgram.obj" ".\Debug\HyperLink.obj" ".\Debug\IvoDB.obj" ".\Debug\IvoDBDlg.obj" ".\Debug\IvoDBInfo.obj" ".\Debug\IvoDBInfoTab.obj" ".\Debug\IvoDbStruct.obj" ".\Debug\LayoutHelper.obj" ".\Debug\MainTab.obj" ".\Debug\OperTabCtrl.obj" ".\Debug\ParentalLock.obj" ".\Debug\ProgramEdit.obj" ".\Debug\ProgramTab.obj" ".\Debug\PVRSettings.obj" ".\Debug\SatTab.obj" ".\Debug\SettingsBase.obj" ".\Debug\SettingsTab.obj" ".\Debug\STBSettings.obj" ".\Debug\stdafx.obj" ".\Debug\TimeDate.obj" ".\Debug\TransponderEdit.obj" ".\Debug\TreeTab.obj" ".\Debug\UserPreferences.obj" ".\Debug\Xmodem.obj" ".\Debug\IvoDB.res" ".\Debug\IvoDB_1_35_Debug.exe.embed.manifest.res" ] Creating command line "link.exe @c:\fta\tools\channel_editor\IvoDB\Debug\RSP00011018082288.rsp /NOLOGO /ERRORREPORT:PROMPT" **Processed /DEFAULTLIB:atlsd.lib Processed /DEFAULTLIB:ws2_32.lib Processed /DEFAULTLIB:mswsock.lib Processed /DISALLOWLIB:mfc90d.lib Processed /DISALLOWLIB:mfcs90d.lib Processed /DISALLOWLIB:mfc90.lib Processed /DISALLOWLIB:mfcs90.lib Processed /DISALLOWLIB:mfc90ud.lib Processed /DISALLOWLIB:mfcs90ud.lib Processed /DISALLOWLIB:mfc90u.lib Processed /DISALLOWLIB:mfcs90u.lib Processed /DISALLOWLIB:uafxcwd.lib Processed /DISALLOWLIB:uafxcw.lib Processed /DISALLOWLIB:nafxcw.lib Found "void __cdecl operator delete(void *)" (??3@YAXPAX@Z) Referenced in axeapi.lib(ipcgeneric.obj) Referenced in 
axeapi.lib(ipccommon.obj) Referenced in axeapi.lib(activeobject.obj) Referenced in nafxcwd.lib(nolib.obj) Referenced in sqliteWrapper.lib(DbConnection.obj) Referenced in sqliteWrapper.lib(Statement.obj) Referenced in axeapi.lib(nvstorage.obj) Referenced in axeapi.lib(avctrler.obj) Referenced in tara_common.lib(trace.obj) Referenced in tara_common.lib(ssPrintf.obj) Referenced in tara_common.lib(taraConfig.obj) Referenced in tara_common.lib(stream.obj) Referenced in tara_common.lib(STBConfigurationStorage.obj) Referenced in tara_common.lib(STBConfiguration.obj) Referenced in tara_common.lib(configParser.obj) Referenced in tara_common.lib(fileStream.obj) Referenced in tara_database.lib(SatStream.obj) Referenced in tara_database.lib(Service.obj) Referenced in tara_database.lib(ServiceList.obj) Referenced in tara_common.lib(playerConfig.obj) Referenced in UserPreferences.obj Referenced in Xmodem.obj Referenced in tara_database.lib(init.obj) Referenced in tara_database.lib(Satellite.obj) Referenced in TransponderEdit.obj Referenced in TreeTab.obj Referenced in TreeTab.obj Referenced in UserPreferences.obj Referenced in stdafx.obj Referenced in TimeDate.obj Referenced in TimeDate.obj Referenced in TransponderEdit.obj Referenced in SettingsTab.obj Referenced in SettingsTab.obj Referenced in STBSettings.obj Referenced in STBSettings.obj Referenced in SatTab.obj Referenced in SatTab.obj Referenced in SettingsBase.obj Referenced in SettingsBase.obj Referenced in ProgramTab.obj Referenced in ProgramTab.obj Referenced in PVRSettings.obj Referenced in PVRSettings.obj Referenced in ParentalLock.obj Referenced in ParentalLock.obj Referenced in ProgramEdit.obj Referenced in ProgramEdit.obj Referenced in MainTab.obj Referenced in MainTab.obj Referenced in OperTabCtrl.obj Referenced in OperTabCtrl.obj Referenced in IvoDBInfoTab.obj Referenced in IvoDbStruct.obj Referenced in LayoutHelper.obj Referenced in LayoutHelper.obj Referenced in IvoDBDlg.obj Referenced in IvoDBInfo.obj Referenced in IvoDBInfo.obj Referenced in IvoDBInfoTab.obj Referenced in HyperLink.obj Referenced in IvoDB.obj Referenced in IvoDB.obj Referenced in IvoDBDlg.obj Referenced in FavTab.obj Referenced in FavTab.obj Referenced in FindProgram.obj Referenced in FindProgram.obj Referenced in DragDropTreeCtrl.obj Referenced in DragDropTreeCtrl.obj Referenced in FavouriteEdit.obj Referenced in FavouriteEdit.obj Referenced in ColorFileDialog.obj Referenced in ColorStatic.obj Referenced in DragDropListCtrl.obj Referenced in DragDropListCtrl.obj Referenced in CMDatabase.obj Referenced in CMDatabase.obj Referenced in CMSettings.obj Referenced in CMSettings.obj Referenced in AntennaSettings.obj Referenced in AntennaSettings.obj Referenced in AudioVideoSettings.obj Referenced in AudioVideoSettings.obj Loaded nafxcwd.lib(afxmem.obj) nafxcwd.lib(afxmem.obj) : error LNK2005: "void __cdecl operator delete(void *,int,char const *,int)" (??3@YAXPAXHPBDH@Z) already defined in tara_common.lib(fileStream.obj)** Please help me out, I have already wasted 3 days looking on net and trying all the possible solutions I found. thanks in advance, SG

    Read the article

  • How to use Ninject with XNA?

    - by Rosarch
    I'm having difficulty integrating Ninject with XNA. static class Program { /** * The main entry point for the application. */ static void Main(string[] args) { IKernel kernel = new StandardKernel(NinjectModuleManager.GetModules()); CachedContentLoader content = kernel.Get<CachedContentLoader>(); // stack overflow here MasterEngine game = kernel.Get<MasterEngine>(); game.Run(); } } // constructor for the game public MasterEngine(IKernel kernel) : base(kernel) { this.inputReader = kernel.Get<IInputReader>(); graphicsDeviceManager = kernel.Get<GraphicsDeviceManager>(); Components.Add(kernel.Get<GamerServicesComponent>()); // Tell the loader to look for all files relative to the "Content" directory. Assets = kernel.Get<CachedContentLoader>(); //Sets dimensions of the game window graphicsDeviceManager.PreferredBackBufferWidth = 800; graphicsDeviceManager.PreferredBackBufferHeight = 600; graphicsDeviceManager.ApplyChanges(); IsMouseVisible = false; } Ninject.cs: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Ninject.Modules; using HWAlphaRelease.Controller; using Microsoft.Xna.Framework; using Nuclex.DependencyInjection.Demo.Scaffolding; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; namespace HWAlphaRelease { public static class NinjectModuleManager { public static NinjectModule[] GetModules() { return new NinjectModule[1] { new GameModule() }; } /// <summary>Dependency injection rules for the main game instance</summary> public class GameModule : NinjectModule { #region class ServiceProviderAdapter /// <summary>Delegates to the game's built-in service provider</summary> /// <remarks> /// <para> /// When a class' constructor requires an IServiceProvider, the dependency /// injector cannot just construct a new one and wouldn't know that it has /// to create an instance of the Game class (or take it from the existing /// Game instance). /// </para> /// <para> /// The solution, then, is this small adapter that takes a Game instance /// and acts as if it was a freely constructable IServiceProvider implementation /// while in reality, it delegates all lookups to the Game's service container. /// </para> /// </remarks> private class ServiceProviderAdapter : IServiceProvider { /// <summary>Initializes a new service provider adapter for the game</summary> /// <param name="game">Game the service provider will be taken from</param> public ServiceProviderAdapter(Game game) { this.gameServices = game.Services; } /// <summary>Retrieves a service from the game service container</summary> /// <param name="serviceType">Type of the service that will be retrieved</param> /// <returns>The service that has been requested</returns> public object GetService(Type serviceType) { return this.gameServices; } /// <summary>Game services container of the Game instance</summary> private GameServiceContainer gameServices; } #endregion // class ServiceProviderAdapter #region class ContentManagerAdapter /// <summary>Delegates to the game's built-in ContentManager</summary> /// <remarks> /// This provides shared access to the game's ContentManager. A dependency /// injected class only needs to require the ISharedContentService in its /// constructor and the dependency injector will automatically resolve it /// to this adapter, which delegates to the Game's built-in content manager. 
/// </remarks> private class ContentManagerAdapter : ISharedContentService { /// <summary>Initializes a new shared content manager adapter</summary> /// <param name="game">Game the content manager will be taken from</param> public ContentManagerAdapter(Game game) { this.contentManager = game.Content; } /// <summary>Loads or accesses shared game content</summary> /// <typeparam name="AssetType">Type of the asset to be loaded or accessed</typeparam> /// <param name="assetName">Path and name of the requested asset</param> /// <returns>The requested asset from the the shared game content store</returns> public AssetType Load<AssetType>(string assetName) { return this.contentManager.Load<AssetType>(assetName); } /// <summary>The content manager this instance delegates to</summary> private ContentManager contentManager; } #endregion // class ContentManagerAdapter /// <summary>Initializes the dependency configuration</summary> public override void Load() { // Allows access to the game class for any components with a dependency // on the 'Game' or 'DependencyInjectionGame' classes. Bind<MasterEngine>().ToSelf().InSingletonScope(); Bind<NinjectGame>().To<MasterEngine>().InSingletonScope(); Bind<Game>().To<MasterEngine>().InSingletonScope(); // Let the dependency injector construct a graphics device manager for // all components depending on the IGraphicsDeviceService and // IGraphicsDeviceManager interfaces Bind<GraphicsDeviceManager>().ToSelf().InSingletonScope(); Bind<IGraphicsDeviceService>().To<GraphicsDeviceManager>().InSingletonScope(); Bind<IGraphicsDeviceManager>().To<GraphicsDeviceManager>().InSingletonScope(); // Some clever adapters that hand out the Game's IServiceProvider and allow // access to its built-in ContentManager Bind<IServiceProvider>().To<ServiceProviderAdapter>().InSingletonScope(); Bind<ISharedContentService>().To<ContentManagerAdapter>().InSingletonScope(); Bind<IInputReader>().To<UserInputReader>().InSingletonScope().WithConstructorArgument("keyMapping", Constants.DEFAULT_KEY_MAPPING); Bind<CachedContentLoader>().ToSelf().InSingletonScope().WithConstructorArgument("rootDir", "Content"); } } } } NinjectGame.cs /// <summary>Base class for Games making use of Ninject</summary> public class NinjectGame : Game { /// <summary>Initializes a new Ninject game instance</summary> /// <param name="kernel">Kernel the game has been created by</param> public NinjectGame(IKernel kernel) { Type ownType = this.GetType(); if(ownType != typeof(Game)) { kernel.Bind<NinjectGame>().To<MasterEngine>().InSingletonScope(); } kernel.Bind<Game>().To<NinjectGame>().InSingletonScope(); } } } // namespace Nuclex.DependencyInjection.Demo.Scaffolding When I try to get the CachedContentLoader, I get a stack overflow exception. I'm basing this off of this tutorial, but I really have no idea what I'm doing. Help?

    Read the article

  • value types in the vm

    - by john.rose
value types in the vm

Or, enduring values for a changing world.

Introduction

A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

This note is a thought experiment to extend the JVM’s performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today’s Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

An Axiom of Value

In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value.
The equals and hashCode operations are strictly component-wise.
If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

A value type can be viewed in terms of what it doesn’t do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation.

A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type.

Representations of Value

Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables:

/*Complex x = Complex.valueOf(1.0, 2.0):*/
double x_re = 1.0, x_im = 2.0;

These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers:

double distance(/*Complex x:*/ double x_re, double x_im,
        /*Complex y:*/ double y_re, double y_im) {
    /*Complex z = x.minus(y):*/
    double z_re = x_re - y_re, z_im = x_im - y_im;
    /*return z.abs():*/
    return Math.sqrt(z_re*z_re + z_im*z_im);
}

A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.)

The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ‘array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and pose some problems for boxed values. Any ‘real’ JVM type should have a story for returns, arrays, and API interoperability.

The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types.
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract. The non-static fields of a value class must be non-private and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex. Here is an example:
@ValueSafe
public interface ValueType extends java.io.Serializable {
    // All methods listed here must get redefined.
    // Definitions must be value-safe, which means
    // they may depend on component values only.
    List<? extends Object> asList();
    int hashCode();
    boolean equals(@ValueSafe Object c);
    String toString();
}
//@ValueSafe inherited from supertype:
public final class Complex implements ValueType { …
The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system. It could appear as an element type or parameter bound, for facilities which are designed to work on value types only. More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types.
Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods. (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.) Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following:
The type of E is a value-safe type.
E names a field, parameter, or local variable whose declaration is marked @ValueSafe.
E is a call to a method whose declaration is marked @ValueSafe.
E is an assignment to a value-safe variable, field reference, or array reference.
E is a cast to a value-safe type from a value-safe expression.
E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe.
Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation. In particular, it cannot be synchronized on, nor can it be compared with the "==" operator, not even with a null or with another value-safe type.
In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation. Thus, the prime axiom of value types will be satisfied: no two value-type values will be distinguishable as long as their component values are equal.
More Code
To illustrate these rules, here are some usage examples for Complex:
Complex pi = Complex.valueOf(Math.PI, 0);
Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0;
ValueType vtype = pi;
@SuppressWarnings("value-unsafe")
Object obj = pi;
@ValueSafe Object obj2 = pi;
obj2 = new Object();  // ok
List<Complex> clist = new ArrayList<Complex>();
clist.add(pi);  // (ok assuming List.add param is @ValueSafe)
List<ValueType> vlist = new ArrayList<ValueType>();
vlist.add(pi);  // (ok)
List<Object> olist = new ArrayList<Object>();
olist.add(pi);  // warning: "value-unsafe"
boolean z = pi.equals(zero);
boolean z1 = (pi == zero);  // error: reference comparison on value type
boolean z2 = (pi == null);  // error: reference comparison on value type
boolean z3 = (pi == obj2);  // error: reference comparison on value type
synchronized (pi) { }  // error: synch of value, unpredictable result
synchronized (obj2) { }  // unpredictable result
Complex qq = pi;
qq = null;  // possible NPE; warning: "null-unsafe"
qq = (Complex) obj;  // warning: "null-unsafe"
qq = Complex.cast(obj);  // OK
@SuppressWarnings("null-unsafe")
Complex empty = null;  // possible NPE
qq = empty;  // possible NPE (null pollution)
The Payoffs
It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters.
Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect value-safe programs automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note, to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.
Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.
There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.
Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around. Here are a couple of first approaches:
public interface java.util.List<@ValueSafe T> extends Collection<T> { …
public interface java.util.List<T extends Object|ValueType> extends Collection<T> { …
(The second approach would require disjunctive types, in which value-safety is "contagious" from the constituent types.)
With more transformations, the return value types of methods can also be unboxed. This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title "Tuples in the VM". But for starters, the JVM can apply this transformation under the covers, to internally compiled methods. This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors. The lack of multiple return values has a strong distorting effect on many Java APIs.
Even if the JVM fails to unbox a value, there is still potential benefit to the value type. Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands. When copying JVM objects, it is extremely helpful to know when an object's identity is important or not. If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible. Proxies must be managed carefully, and this can be expensive. On the other hand, value types are exactly those types which a JVM can "copy and forget" with no downside.
Array types are crucial to bulk data interfaces. (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.) Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems. They require the deepest transformations, relative to today's JVM. There is an impedance mismatch between value-type arrays and Java's covariant array typing, so compromises will need to be struck with existing Java semantics. It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains.
It may be sufficient to extend the "value-safe" concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form. Such value-safe arrays would not be convertible to Object[] arrays. Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements.
Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
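To make the unboxing payoff concrete, here is one plausible lowering of a non-static method on Complex into static methods over scalar components, as described under "The Payoffs" above. This is purely illustrative: the names and the one-method-per-result-component shape are my own convention, not a specification, and a tuple-returning bytecode (as in "Tuples in the VM") would merge the pair into a single method:
// boxed original:  Complex plus(Complex c) { return new Complex(re+c.re, im+c.im); }
// one possible unboxed lowering, one static method per result component:
static double plus$re(double x_re, double x_im, double y_re, double y_im) { return x_re + y_re; }
static double plus$im(double x_re, double x_im, double y_re, double y_im) { return x_im + y_im; }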
Implicit Method Definitions
The example of class Complex above may be unattractively complex. I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code. On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated.
Java has a rule for implicitly defining a class's constructor, if it defines no constructors explicitly. Likewise, there are rules for providing default access modifiers for interface members. Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types. Here's an example of a "highly implicit" definition of a complex number type:
public class Complex implements ValueType {  // implicitly final
    public double re, im;  // implicitly public final
    //implicit methods are defined elementwise from the fields:
    //  toString, asList, equals(2), hashCode, valueOf, cast
    //optionally, explicit methods (plus, abs, etc.) would go here
}
In other words, with the right defaults, a simple value type definition can be a one-liner. The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>.
Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly:
public @ValueType class Complex { … // implicitly final, implements ValueType
(But to me it seems better to communicate the "magic" via an interface, even if it is rooted in an annotation.)
Implicitly Defined Value Types
So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer. A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number, or a Cartesian point. The name and the methods convey the intended meaning.
But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage? Perhaps we are writing a method (like "divideAndRemainder") which naturally returns a pair of numbers instead of a single number. Wrapping the pair of numbers in a nominal type (like "QuotientAndRemainder") makes as little sense as wrapping a single return value in a nominal type (like "Quotient"). What we need here are structural value types, commonly known as tuples.
For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows:
public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {
     double $1, $2;
}
Here the component names are fixed and all the required methods are defined implicitly. The supertype is an abstract class which has suitable shared declarations. The name itself mentions a JVM-style method parameter descriptor, which may be "cracked" to determine the number and types of the component fields.
The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction? Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).
My initial thought was to make the fields always public, which would make the security problem moot. But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes. I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value. Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.
Similar to String, we could build an efficient multi-precision decimal type along these lines:
public final class DecimalValue implements ValueType {
    protected final long header;
    protected final BigInteger digits;
    public static DecimalValue valueOf(int value, int scale) {
        assert(scale >= 0);
        return new DecimalValue(((long)value << 32) + scale, null);
    }
    public static DecimalValue valueOf(long value, int scale) {
        if (value == (int) value)
            return valueOf((int)value, scale);
        return new DecimalValue(-scale, BigInteger.valueOf(value));
    }
}
Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed.
(Note the tension between encapsulation and unboxing in this case. It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)
Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues. (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.) Getting the full benefit of unboxing and arrays will require some new JVM magic.
Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits. For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types. This is probably only worthwhile if the unboxing arrays can be packed with such values.
More Daydreams
A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant. An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes. More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
@ValueSafe
public interface ValueType extends java.io.Serializable,
        Unsynchronizable, Interchangeable { …
public class Complex implements ValueType {
    // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
    …
It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code.
We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:
complex pi = complex.valueOf(Math.PI, 0);
Complex boxPi = pi;  // convert to boxed
myList.add(boxPi);
complex z = myList.get(0);  // unbox
Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.
As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure? These considerations must wait for another day and another note.
More Work
It seems to me that a good sequence for introducing such value types would be as follows:
Add the value-safety restrictions to an experimental version of javac.
Code some sample applications with value types, including Complex and DecimalValue.
Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.
A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.
A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:
Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
Ensure the feasibility of the performance model for the sample applications.
Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.
That's enough musing from me for now. Back to work!

    Read the article

  • Problem with JOGL and Framebuffer Render-to-texture: Invalid Framebuffer Operation Error

    - by quadelirus
    Okay, so I am trying to render a scene to a small 32x32 texture and ran into problems. I get an "invalid framebuffer operation" error when I try to actually draw anything to the texture. I have simplified the code below so that it simply tries to render a quad to a texture and then bind that quad as a texture for another quad that is rendered to the screen. So my question is this... where is the error? This is using JOGL 1.1.1. The error occurs at Checkpoint2 in the code.

    import java.awt.event.*;
    import javax.media.opengl.*;
    import javax.media.opengl.glu.*;
    import javax.swing.JFrame;
    import java.nio.*;

    public class Main extends JFrame implements GLEventListener, KeyListener, MouseListener, MouseMotionListener, ActionListener {
        /* GL related variables */
        private final GLCanvas canvas;
        private GL gl;
        private GLU glu;
        private int winW = 600, winH = 600;
        private int texRender_FBO;
        private int texRender_RB;
        private int texRender_32x32;

        public static void main(String args[]) {
            new Main();
        }

        /* creates OpenGL window */
        public Main() {
            super("Problem Child");
            canvas = new GLCanvas();
            canvas.addGLEventListener(this);
            canvas.addKeyListener(this);
            canvas.addMouseListener(this);
            canvas.addMouseMotionListener(this);
            getContentPane().add(canvas);
            setSize(winW, winH);
            setLocationRelativeTo(null);
            setDefaultCloseOperation(EXIT_ON_CLOSE);
            setVisible(true);
            canvas.requestFocus();
        }

        /* gl display function */
        public void display(GLAutoDrawable drawable) {
            gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, this.texRender_FBO);
            gl.glPushAttrib(GL.GL_VIEWPORT_BIT);
            gl.glViewport(0, 0, 32, 32);
            gl.glClearColor(1.f, 0.f, 0.f, 1.f);
            System.out.print("Checkpoint1: ");
            outputError();
            gl.glBegin(GL.GL_QUADS);
            {
                //gl.glTexCoord2f(0.0f, 0.0f);
                gl.glColor3f(1.f, 0.f, 0.f); gl.glVertex3f(0.0f, 1.0f, 1.0f);
                //gl.glTexCoord2f(1.0f, 0.0f);
                gl.glColor3f(1.f, 1.f, 0.f); gl.glVertex3f(1.0f, 1.0f, 1.0f);
                //gl.glTexCoord2f(1.0f, 1.0f);
                gl.glColor3f(1.f, 1.f, 1.f); gl.glVertex3f(1.0f, 0.0f, 1.0f);
                //gl.glTexCoord2f(0.0f, 1.0f);
                gl.glColor3f(1.f, 0.f, 1.f); gl.glVertex3f(0.0f, 0.0f, 1.0f);
            }
            gl.glEnd();
            System.out.print("Checkpoint2: ");
            outputError(); //Here I get an invalid framebuffer operation
            gl.glPopAttrib();
            gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
            gl.glClearColor(0.f, 0.f, 0.f, 1.f);
            gl.glClear(GL.GL_COLOR_BUFFER_BIT);
            gl.glColor3f(1.f, 1.f, 1.f);
            gl.glBindTexture(GL.GL_TEXTURE_1D, this.texRender_32x32);
            gl.glBegin(GL.GL_QUADS);
            {
                gl.glTexCoord2f(0.0f, 0.0f); //gl.glColor3f(1.f, 0.f, 0.f);
                gl.glVertex3f(0.0f, 1.0f, 1.0f);
                gl.glTexCoord2f(1.0f, 0.0f); //gl.glColor3f(1.f, 1.f, 0.f);
                gl.glVertex3f(1.0f, 1.0f, 1.0f);
                gl.glTexCoord2f(1.0f, 1.0f); //gl.glColor3f(1.f, 1.f, 1.f);
                gl.glVertex3f(1.0f, 0.0f, 1.0f);
                gl.glTexCoord2f(0.0f, 1.0f); //gl.glColor3f(1.f, 0.f, 1.f);
                gl.glVertex3f(0.0f, 0.0f, 1.0f);
            }
            gl.glEnd();
        }

        /* initialize GL */
        public void init(GLAutoDrawable drawable) {
            gl = drawable.getGL();
            glu = new GLU();
            gl.glClearColor(.3f, .3f, .3f, 1f);
            gl.glClearDepth(1.0f);
            gl.glMatrixMode(GL.GL_PROJECTION);
            gl.glLoadIdentity();
            gl.glOrtho(0, 1, 0, 1, -10, 10);
            gl.glMatrixMode(GL.GL_MODELVIEW);
            //Set up the 32x32 texture
            this.texRender_FBO = genFBO(gl);
            gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, this.texRender_FBO);
            this.texRender_32x32 = genTexture(gl);
            gl.glBindTexture(GL.GL_TEXTURE_2D, this.texRender_32x32);
            gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB_FLOAT32_ATI, 32, 32, 0, GL.GL_RGB, GL.GL_FLOAT, null);
            gl.glFramebufferTexture2DEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_COLOR_ATTACHMENT0_EXT, GL.GL_TEXTURE_2D,
                    this.texRender_32x32, 0);
            //gl.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0_EXT);
            this.texRender_RB = genRB(gl);
            gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, this.texRender_RB);
            gl.glRenderbufferStorageEXT(GL.GL_RENDERBUFFER_EXT, GL.GL_DEPTH_COMPONENT24, 32, 32);
            gl.glFramebufferRenderbufferEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_DEPTH_ATTACHMENT_EXT, GL.GL_RENDERBUFFER_EXT, this.texRender_RB);
            gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
            gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, 0);
            outputError();
        }

        private void outputError() {
            int c;
            if ((c = gl.glGetError()) != GL.GL_NO_ERROR)
                System.out.println(glu.gluErrorString(c));
        }

        private int genRB(GL gl) {
            int[] array = new int[1];
            IntBuffer ib = IntBuffer.wrap(array);
            gl.glGenRenderbuffersEXT(1, ib);
            return ib.get(0);
        }

        private int genFBO(GL gl) {
            int[] array = new int[1];
            IntBuffer ib = IntBuffer.wrap(array);
            gl.glGenFramebuffersEXT(1, ib);
            return ib.get(0);
        }

        private int genTexture(GL gl) {
            final int[] tmp = new int[1];
            gl.glGenTextures(1, tmp, 0);
            return tmp[0];
        }

        /* mouse and keyboard callback functions */
        public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) {
            winW = width;
            winH = height;
            gl.glViewport(0, 0, width, height);
        }

        //Sorry about these, I just had to delete massive amounts of code to boil this thing down and these are hangers-on
        public void mousePressed(MouseEvent e) {}
        public void mouseDragged(MouseEvent e) {}
        public void mouseReleased(MouseEvent e) {}
        public void keyPressed(KeyEvent e) {}
        public void displayChanged(GLAutoDrawable drawable, boolean modeChanged, boolean deviceChanged) { }
        public void keyTyped(KeyEvent e) { }
        public void keyReleased(KeyEvent e) { }
        public void mouseMoved(MouseEvent e) { }
        public void actionPerformed(ActionEvent e) { }
        public void mouseClicked(MouseEvent e) { }
        public void mouseEntered(MouseEvent e) { }
        public void mouseExited(MouseEvent e) { }
    }
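    A hedged aside for anyone hitting the same error: "invalid framebuffer operation" usually means drawing was attempted while the bound FBO was incomplete, so it is worth dumping the completeness status right after the FBO setup. The calls below belong to the same EXT_framebuffer_object binding the code above already uses, but this is an untested diagnostic sketch, not a confirmed fix:

    int status = gl.glCheckFramebufferStatusEXT(GL.GL_FRAMEBUFFER_EXT);
    if (status != GL.GL_FRAMEBUFFER_COMPLETE_EXT) {
        // e.g. incomplete attachment, or GL_FRAMEBUFFER_UNSUPPORTED_EXT for this format combination
        System.out.println("FBO incomplete, status: 0x" + Integer.toHexString(status));
    }

    It may also be worth double-checking that display() binds the texture with GL.GL_TEXTURE_2D; the listing above binds GL.GL_TEXTURE_1D even though the texture was created as a 2D texture.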

    Read the article

  • embedded glassfish: java.lang.NoClassDefFoundError: java/util/ServiceLoader

    - by Xinus
    I am trying to embed GlassFish inside my Java program using the embedded API. I am using Maven 2 and its pom.xml is as follows:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>orh.highmark</groupId>
      <artifactId>glassfish-test1</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>jar</packaging>
      <name>glassfish-test1</name>
      <url>http://maven.apache.org</url>
      <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      </properties>
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>3.8.1</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>org.glassfish.extras</groupId>
          <artifactId>glassfish-embedded-all</artifactId>
          <version>3.1-SNAPSHOT</version>
        </dependency>
      </dependencies>
      <repositories>
        <repository>
          <id>maven2-repository.dev.java.net</id>
          <name>Java.net Repository for Maven</name>
          <url>http://download.java.net/maven/2/</url>
          <layout>default</layout>
        </repository>
        <repository>
          <id>glassfish-repository</id>
          <name>GlassFish Nexus Repository</name>
          <url>http://maven.glassfish.org/content/groups/glassfish</url>
        </repository>
      </repositories>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.1</version>
            <executions>
              <execution>
                <goals>
                  <goal>java</goal>
                </goals>
              </execution>
            </executions>
            <configuration>
              <mainClass>orh.highmark.App</mainClass>
              <arguments>
                <argument>argument1</argument>
              </arguments>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </project>

    Program:

    public class App {
        public static void main( String[] args ) {
            System.out.println( "Hello World!" );
            Server.Builder builder = new Server.Builder("test");
            builder.logger(true);
            Server server = builder.build();
        }
    }

    But for some reason it always gives me the error java.lang.NoClassDefFoundError: java/util/ServiceLoader. Here is the output:

    C:\Users\sunils\glassfish-tests\glassfish-test1>mvn -e exec:java
    + Error stacktraces are turned on.
    [INFO] Scanning for projects...
    [INFO] ------------------------------------------------------------------------
    [INFO] Building glassfish-test1
    [INFO]    task-segment: [exec:java]
    [INFO] ------------------------------------------------------------------------
    [INFO] Preparing exec:java
    [INFO] No goals needed for project - skipping
    [INFO] [exec:java {execution: default-cli}]
    Hello World!
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD ERROR
    [INFO] ------------------------------------------------------------------------
    [INFO] An exception occured while executing the Java class. null
    java/util/ServiceLoader
    [INFO] ------------------------------------------------------------------------
    [INFO] Trace
    org.apache.maven.lifecycle.LifecycleExecutionException: An exception occured while executing the Java class. null
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
        at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
        at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
        at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
        at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
    Caused by: org.apache.maven.plugin.MojoExecutionException: An exception occured while executing the Java class. null
        at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:345)
        at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
        ... 17 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:290)
        at java.lang.Thread.run(Thread.java:595)
    Caused by: java.lang.NoClassDefFoundError: java/util/ServiceLoader
        at org.glassfish.api.embedded.Server.getMain(Server.java:701)
        at org.glassfish.api.embedded.Server.<init>(Server.java:290)
        at org.glassfish.api.embedded.Server.<init>(Server.java:75)
        at org.glassfish.api.embedded.Server$Builder.build(Server.java:185)
        at org.glassfish.api.embedded.Server$Builder.build(Server.java:167)
        at orh.highmark.App.main(App.java:14)
        ... 6 more
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 1 second
    [INFO] Finished at: Sat May 08 02:55:03 IST 2010
    [INFO] Final Memory: 3M/6M
    [INFO] ------------------------------------------------------------------------

    I can't tell whether this is a problem with my program or with the GlassFish API. Can somebody please help me understand what is happening here and how to rectify it? Thanks for any clue.
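    A hedged observation rather than a confirmed diagnosis: java.util.ServiceLoader only exists from Java 6 on, and the line numbers in the reflection frames above (Method.java:585, Thread.java:595) are characteristic of a 1.5 JRE, so the build is probably running on JDK 5. Since exec:java runs inside the Maven JVM, the likely fix is to run Maven itself on a Java 6 JDK (check with mvn -v and adjust JAVA_HOME); declaring the language level in the POM, as sketched below, then keeps the compiler consistent with the runtime:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>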

    Read the article

  • Implementing an async "read all currently available data from stream" operation

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty.
    I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers.
    The public interface of the class is this:

    public class ConsoleAutomator {
        public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead;
        public void StartSendingEvents();
        public void StopSendingEvents();
    }

    StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally:

    protected readonly StringBuilder inputAccumulator = new StringBuilder();
    protected readonly byte[] buffer = new byte[256];

    The functionality of the class is implemented in the methods below. To get the ball rolling:

    public void StartSendingEvents() {
        this.stopAutomation = false;
        this.BeginReadAsync();
    }

    To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called:

    protected void BeginReadAsync() {
        if (!this.stopAutomation) {
            this.StandardOutput.BaseStream.BeginRead(
                this.buffer, 0, this.buffer.Length, this.ReadHappened, null);
        }
    }

    The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream.

    private void ReadHappened(IAsyncResult asyncResult) {
        var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult);
        if (bytesRead == 0) {
            this.OnAutomationStopped();
            return;
        }
        var input = this.StandardOutput.CurrentEncoding.GetString(this.buffer, 0, bytesRead);
        this.inputAccumulator.Append(input);
        if (bytesRead < this.buffer.Length) {
            this.OnInputRead();  // only send back if we're sure we got it all
        }
        this.BeginReadAsync();  // continue "looping" with BeginRead
    }

    After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers:

    private void OnInputRead() {
        var handler = this.StandardOutputRead;
        if (handler == null) {
            return;
        }
        handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString()));
        this.inputAccumulator.Clear();
    }

    (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision).
    The good
    This scheme works almost perfectly:
    - Async functionality without spawning any threads
    - Very convenient to the calling code (just subscribe to an event)
    - Never more than one event for each time data is available to be read
    - Is almost agnostic to the buffer size
    The bad
    That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there.
    Solution?
    Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.
    Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: The question is about how to implement an async "read all the data available this moment" operation.
My apologies to the people who took the time to read and answer without me making my intent clear enough.
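    A hedged sketch of one way out, reusing the same members as the writeup above: if the "exactly one event per chunk" goal is relaxed, the exact-buffer-length ambiguity disappears entirely, because the callback can flush whatever it has on every read instead of guessing where a chunk ends:

    private void ReadHappened(IAsyncResult asyncResult) {
        int bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult);
        if (bytesRead == 0) { this.OnAutomationStopped(); return; }

        string input = this.StandardOutput.CurrentEncoding.GetString(this.buffer, 0, bytesRead);
        this.inputAccumulator.Append(input);
        this.OnInputRead();      // always flush; no data can be stranded in the accumulator
        this.BeginReadAsync();
    }

    The trade-off is that a chunk spanning several reads raises several events, so chunk boundaries become the subscriber's problem; whether that is acceptable depends on the caller.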

    Read the article

  • With a jQuery modal dialog, how do I stop the form values from persisting?

    - by stormist
    (Citing source at: http://jqueryui.com/demos/dialog/#modal-form) As an example, this works great, but each time the form is subsequently opened the user-entered values remain. How can I stop this behavior? (The form will be used multiple times on the same page.)

    <style type="text/css">
    body { font-size: 62.5%; }
    label, input { display:block; }
    input.text { margin-bottom:12px; width:95%; padding: .4em; }
    fieldset { padding:0; border:0; margin-top:25px; }
    h1 { font-size: 1.2em; margin: .6em 0; }
    div#users-contain { width: 350px; margin: 20px 0; }
    div#users-contain table { margin: 1em 0; border-collapse: collapse; width: 100%; }
    div#users-contain table td, div#users-contain table th { border: 1px solid #eee; padding: .6em 10px; text-align: left; }
    .ui-dialog .ui-state-error { padding: .3em; }
    .validateTips { border: 1px solid transparent; padding: 0.3em; }
    </style>
    <script type="text/javascript">
    $(function() {
        // a workaround for a flaw in the demo system (http://dev.jqueryui.com/ticket/4375), ignore!
        $("#dialog").dialog("destroy");
        var name = $("#name"),
            email = $("#email"),
            password = $("#password"),
            allFields = $([]).add(name).add(email).add(password),
            tips = $(".validateTips");
        function updateTips(t) {
            tips.text(t).addClass('ui-state-highlight');
            setTimeout(function() {
                tips.removeClass('ui-state-highlight', 1500);
            }, 500);
        }
        function checkLength(o,n,min,max) {
            if ( o.val().length > max || o.val().length < min ) {
                o.addClass('ui-state-error');
                updateTips("Length of " + n + " must be between "+min+" and "+max+".");
                return false;
            } else {
                return true;
            }
        }
        function checkRegexp(o,regexp,n) {
            if ( !( regexp.test( o.val() ) ) ) {
                o.addClass('ui-state-error');
                updateTips(n);
                return false;
            } else {
                return true;
            }
        }
        $("#dialog-form").dialog({
            autoOpen: false,
            height: 300,
            width: 350,
            modal: true,
            buttons: {
                'Create an account': function() {
                    var bValid = true;
                    allFields.removeClass('ui-state-error');
                    bValid = bValid && checkLength(name,"username",3,16);
                    bValid = bValid && checkLength(email,"email",6,80);
                    bValid = bValid && checkLength(password,"password",5,16);
                    bValid = bValid && checkRegexp(name,/^[a-z]([0-9a-z_])+$/i,"Username may consist of a-z, 0-9, underscores, begin with a letter.");
                    // From jquery.validate.js (by joern), contributed by Scott Gonzalez: http://projects.scottsplayground.com/email_address_validation/
                    bValid = bValid && checkRegexp(email,/^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$/i,"eg. [email protected]");
                    bValid = bValid && checkRegexp(password,/^([0-9a-zA-Z])+$/,"Password field only allow : a-z 0-9");
                    if (bValid) {
                        $('#users tbody').append('<tr>' +
                            '<td>' + name.val() + '</td>' +
                            '<td>' + email.val() + '</td>' +
                            '<td>' + password.val() + '</td>' +
                            '</tr>');
                        $(this).dialog('close');
                    }
                },
                Cancel: function() {
                    $(this).dialog('close');
                }
            },
            close: function() {
                allFields.val('').removeClass('ui-state-error');
            }
        });
        $('#create-user').button().click(function() {
            $('#dialog-form').dialog('open');
        });
    });
    </script>
    <div class="demo">
    <div id="dialog-form" title="Create new user">
        <p class="validateTips">All form fields are required.</p>
        <form>
        <fieldset>
            <label for="name">Name</label>
            <input type="text" name="name" id="name" class="text ui-widget-content ui-corner-all" />
            <label for="email">Email</label>
            <input type="text" name="email" id="email" value="" class="text ui-widget-content ui-corner-all" />
            <label for="password">Password</label>
            <input type="password" name="password" id="password" value="" class="text ui-widget-content ui-corner-all" />
        </fieldset>
        </form>
    </div>
    <div id="users-contain" class="ui-widget">
        <h1>Existing Users:</h1>
        <table id="users" class="ui-widget ui-widget-content">
            <thead>
                <tr class="ui-widget-header ">
                    <th>Name</th>
                    <th>Email</th>
                    <th>Password</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>John Doe</td>
                    <td>[email protected]</td>
                    <td>johndoe1</td>
                </tr>
            </tbody>
        </table>
    </div>
    <button id="create-user">Create new user</button>
    </div><!-- End demo -->
    <div class="demo-description">
    <p>Use a modal dialog to require that the user enter data during a multi-step process. Embed form markup in the content area, set the <code>modal</code> option to true, and specify primary and secondary user actions with the <code>buttons</code> option.</p>
    </div><!-- End demo-description -->

    Read the article

  • How to capture live camera frames in RGB with DirectShow

    - by Jonny Boy
    I'm implementing live video capture through DirectShow for live processing and display (Augmented Reality app). I can access the pixels easily enough, but it seems I can't get the SampleGrabber to provide RGB data. The device (an iSight -- running VC++ Express in VMWare) only reports MEDIASUBTYPE_YUY2. After extensive Googling, I still can't figure out whether DirectShow is supposed to provide built-in color space conversion for this sort of thing. Some sites report that there is no YUV<->RGB conversion built in, others report that you just have to call SetMediaType on your ISampleGrabber with an RGB subtype. Any advice is greatly appreciated, I'm going nuts on this one. Code provided below. Please note that:
    - The code works, except that it doesn't provide RGB data.
    - I'm aware that I can implement my own conversion filter, but this is not feasible because I'd have to anticipate every possible device format, and this is a relatively small project.

    // Playback
    IGraphBuilder *pGraphBuilder = NULL;
    ICaptureGraphBuilder2 *pCaptureGraphBuilder2 = NULL;
    IMediaControl *pMediaControl = NULL;
    IBaseFilter *pDeviceFilter = NULL;
    IAMStreamConfig *pStreamConfig = NULL;
    BYTE *videoCaps = NULL;
    AM_MEDIA_TYPE **mediaTypeArray = NULL;

    // Device selection
    ICreateDevEnum *pCreateDevEnum = NULL;
    IEnumMoniker *pEnumMoniker = NULL;
    IMoniker *pMoniker = NULL;
    ULONG nFetched = 0;

    HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);

    // Create CreateDevEnum to list device
    hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (PVOID *)&pCreateDevEnum);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // Create EnumMoniker to list devices
    hr = pCreateDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnumMoniker, 0);
    if (FAILED(hr)) goto ReleaseDataAndFail;
    pEnumMoniker->Reset();

    // Find desired device
    while (pEnumMoniker->Next(1, &pMoniker, &nFetched) == S_OK) {
        IPropertyBag *pPropertyBag;
        TCHAR devname[256];
        // bind to IPropertyBag
        hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pPropertyBag);
        if (FAILED(hr)) {
            pMoniker->Release();
            continue;
        }
        VARIANT varName;
        VariantInit(&varName);
        HRESULT hr = pPropertyBag->Read(L"DevicePath", &varName, 0);
        if (FAILED(hr)) {
            pMoniker->Release();
            pPropertyBag->Release();
            continue;
        }
        char devicePath[DeviceInfo::STRING_LENGTH_MAX] = "";
        wcstombs(devicePath, varName.bstrVal, DeviceInfo::STRING_LENGTH_MAX);
        if (strcmp(devicePath, deviceId) == 0) {
            // Bind Moniker to Filter
            pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&pDeviceFilter);
            break;
        }
        pMoniker->Release();
        pPropertyBag->Release();
    }
    if (pDeviceFilter == NULL) goto ReleaseDataAndFail;

    // Create sample grabber
    IBaseFilter *pGrabberF = NULL;
    hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&pGrabberF);
    if (FAILED(hr)) goto ReleaseDataAndFail;
    hr = pGrabberF->QueryInterface(IID_ISampleGrabber, (void**)&pGrabber);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // Create FilterGraph
    hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC, IID_IGraphBuilder, (LPVOID *)&pGraphBuilder);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // create CaptureGraphBuilder2
    hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC, IID_ICaptureGraphBuilder2, (LPVOID *)&pCaptureGraphBuilder2);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // set FilterGraph
    hr = pCaptureGraphBuilder2->SetFiltergraph(pGraphBuilder);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // get MediaControl interface
    hr = pGraphBuilder->QueryInterface(IID_IMediaControl, (LPVOID *)&pMediaControl);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // Add filters
    hr = pGraphBuilder->AddFilter(pDeviceFilter, L"Device Filter");
    if (FAILED(hr)) goto ReleaseDataAndFail;
    hr = pGraphBuilder->AddFilter(pGrabberF, L"Sample Grabber");
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // Set sample grabber options
    AM_MEDIA_TYPE mt;
    ZeroMemory(&mt, sizeof(AM_MEDIA_TYPE));
    mt.majortype = MEDIATYPE_Video;
    mt.subtype = MEDIASUBTYPE_RGB32;
    hr = pGrabber->SetMediaType(&mt);
    if (FAILED(hr)) goto ReleaseDataAndFail;
    hr = pGrabber->SetOneShot(FALSE);
    if (FAILED(hr)) goto ReleaseDataAndFail;
    hr = pGrabber->SetBufferSamples(TRUE);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    // Get stream config interface
    hr = pCaptureGraphBuilder2->FindInterface(NULL, &MEDIATYPE_Video, pDeviceFilter, IID_IAMStreamConfig, (void **)&pStreamConfig);
    if (FAILED(hr)) goto ReleaseDataAndFail;

    int streamCapsCount = 0, capsSize, bestFit = -1,
        bestFitPixelDiff = 1000000000, desiredPixelCount = _width * _height,
        bestFitWidth = 0, bestFitHeight = 0;
    float desiredAspectRatio = (float)_width / (float)_height;
    hr = pStreamConfig->GetNumberOfCapabilities(&streamCapsCount, &capsSize);
    if (FAILED(hr)) goto ReleaseDataAndFail;
    videoCaps = (BYTE *)malloc(capsSize * streamCapsCount);
    mediaTypeArray = (AM_MEDIA_TYPE **)malloc(sizeof(AM_MEDIA_TYPE *) * streamCapsCount);
    for (int i = 0; i < streamCapsCount; i++) {
        hr = pStreamConfig->GetStreamCaps(i, &mediaTypeArray[i], videoCaps + capsSize * i);
        if (FAILED(hr)) continue;
        VIDEO_STREAM_CONFIG_CAPS *currentVideoCaps = (VIDEO_STREAM_CONFIG_CAPS *)(videoCaps + capsSize * i);
        int closestWidth = MAX(currentVideoCaps->MinOutputSize.cx, MIN(currentVideoCaps->MaxOutputSize.cx, width));
        int closestHeight = MAX(currentVideoCaps->MinOutputSize.cy, MIN(currentVideoCaps->MaxOutputSize.cy, height));
        int pixelDiff = ABS(desiredPixelCount - closestWidth * closestHeight);
        if (pixelDiff < bestFitPixelDiff && ABS(desiredAspectRatio - (float)closestWidth / (float)closestHeight) < 0.1f) {
            bestFit = i;
            bestFitPixelDiff = pixelDiff;
            bestFitWidth = closestWidth;
            bestFitHeight = closestHeight;
        }
    }
    if (bestFit == -1) goto ReleaseDataAndFail;
    AM_MEDIA_TYPE *mediaType;
    hr = pStreamConfig->GetFormat(&mediaType);
    if (FAILED(hr)) goto ReleaseDataAndFail;
    VIDEOINFOHEADER *videoInfoHeader = (VIDEOINFOHEADER *)mediaType->pbFormat;
    videoInfoHeader->bmiHeader.biWidth = bestFitWidth;
    videoInfoHeader->bmiHeader.biHeight = bestFitHeight;
    //mediaType->subtype = MEDIASUBTYPE_RGB32;
    hr = pStreamConfig->SetFormat(mediaType);
    if (FAILED(hr)) goto ReleaseDataAndFail;
    pStreamConfig->Release();
    pStreamConfig = NULL;
    free(videoCaps);
    videoCaps = NULL;
    free(mediaTypeArray);
    mediaTypeArray = NULL;

    // Connect pins
    IPin *pDeviceOut = NULL, *pGrabberIn = NULL;
    if (FindPin(pDeviceFilter, PINDIR_OUTPUT, 0, &pDeviceOut) && FindPin(pGrabberF, PINDIR_INPUT, 0, &pGrabberIn)) {
        hr = pGraphBuilder->Connect(pDeviceOut, pGrabberIn);
        if (FAILED(hr)) goto ReleaseDataAndFail;
    } else {
        goto ReleaseDataAndFail;
    }

    // start playing
    hr = pMediaControl->Run();
    if (FAILED(hr)) goto ReleaseDataAndFail;
    hr = pGrabber->GetConnectedMediaType(&mt);

    // Set dimensions
    width = bestFitWidth;
    height = bestFitHeight;
    _width = bestFitWidth;
    _height = bestFitHeight;

    // Allocate pixel buffer
    pPixelBuffer = (unsigned *)malloc(width * height * 4);

    // Release objects
    pGraphBuilder->Release();
    pGraphBuilder = NULL;
    pEnumMoniker->Release();
    pEnumMoniker = NULL;
    pCreateDevEnum->Release();
    pCreateDevEnum = NULL;
    return true;
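    A hedged suggestion rather than a verified answer: if the graph will not insert a converter for this device, handling just the one format the device actually reports (YUY2) is far less work than anticipating every possible format. The sketch below converts YUY2 macropixels (Y0 U Y1 V) to RGB32 using the standard BT.601 integer approximation; the coefficients follow the commonly published YUV-to-RGB formulas, but the code is untested in this exact pipeline:

    static inline BYTE ClampToByte(int v) { return (BYTE)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

    // Convert a YUY2 buffer (two pixels per 4 bytes) to RGB32 (B, G, R, X per pixel).
    void Yuy2ToRgb32(const BYTE *src, BYTE *dst, int width, int height)
    {
        for (int i = 0; i < width * height / 2; i++, src += 4, dst += 8) {
            int d = src[1] - 128, e = src[3] - 128;            // chroma shared by the pixel pair
            for (int p = 0; p < 2; p++) {
                int c = src[p * 2] - 16;                       // luma for this pixel
                dst[p*4 + 0] = ClampToByte((298*c + 516*d + 128) >> 8);          // B
                dst[p*4 + 1] = ClampToByte((298*c - 100*d - 208*e + 128) >> 8);  // G
                dst[p*4 + 2] = ClampToByte((298*c + 409*e + 128) >> 8);          // R
                dst[p*4 + 3] = 0;                                                // X
            }
        }
    }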

    Read the article

  • XSL using apply-templates and match instead of call-template

    - by AdRock
    I am trying to make the transition from using call-template to using apply-templates and match, but I'm not getting any data displayed, only what is between the volunteer tags. When I use call-template it works fine, but it was suggested that I use apply-templates and match, and now it doesn't work. Any ideas how to make this work? I can then apply it to all my stylesheets.

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:key name="volunteers-by-region" match="volunteer" use="region" />
    <xsl:template name="hoo" match="/">
      <html>
      <head>
        <title>Registered Volunteers</title>
        <link rel="stylesheet" type="text/css" href="volunteer.css" />
      </head>
      <body>
        <h1>Registered Volunteers</h1>
        <h3>Ordered by the username ascending</h3>
        <h3>Grouped by the region</h3>
        <xsl:for-each select="folktask/member[user/account/userlevel='2']">
          <xsl:for-each select="volunteer[count(. | key('volunteers-by-region', region)[1]) = 1]">
            <xsl:sort select="region" />
            <xsl:for-each select="key('volunteers-by-region', region)">
              <xsl:sort select="folktask/member/user/personal/name" />
              <div class="userdiv">
                <xsl:apply-templates/>
                <!--<xsl:call-template name="member_userid">
                  <xsl:with-param name="myid" select="../user/@id" />
                </xsl:call-template>
                <xsl:call-template name="member_name">
                  <xsl:with-param name="myname" select="../user/personal/name" />
                </xsl:call-template>-->
              </div>
            </xsl:for-each>
          </xsl:for-each>
        </xsl:for-each>
        <xsl:if test="position()=last()">
          <div class="count"><h2>Total number of volunteers: <xsl:value-of select="count(/folktask/member/user/account/userlevel[text()=2])"/></h2></div>
        </xsl:if>
      </body>
      </html>
    </xsl:template>
    <xsl:template match="folktask/member">
      <xsl:apply-templates select="user/@id"/>
      <xsl:apply-templates select="user/personal/name"/>
    </xsl:template>
    <xsl:template match="user/@id">
      <div class="heading bold"><h2>USER ID: <xsl:value-of select="." /></h2></div>
    </xsl:template>
    <xsl:template match="user/personal/name">
      <div class="small bold">NAME:</div>
      <div class="large"><xsl:value-of select="." /></div>
    </xsl:template>
    </xsl:stylesheet>

    and my XML file:

    <folktask xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xs:noNamespaceSchemaLocation="folktask.xsd">
      <member>
        <user id="1">
          <personal>
            <name>Abbie Hunt</name>
            <sex>Female</sex>
            <address1>108 Access Road</address1>
            <address2></address2>
            <city>Wells</city>
            <county>Somerset</county>
            <postcode>BA5 8GH</postcode>
            <telephone>01528927616</telephone>
            <mobile>07085252492</mobile>
            <email>[email protected]</email>
          </personal>
          <account>
            <username>AdRock</username>
            <password>269eb625e2f0cf6fae9a29434c12a89f</password>
            <userlevel>4</userlevel>
            <signupdate>2010-03-26T09:23:50</signupdate>
          </account>
        </user>
        <volunteer id="1">
          <roles></roles>
          <region>South West</region>
        </volunteer>
      </member>
      <member>
        <user id="2">
          <personal>
            <name>Aidan Harris</name>
            <sex>Male</sex>
            <address1>103 Aiken Street</address1>
            <address2></address2>
            <city>Chichester</city>
            <county>Sussex</county>
            <postcode>PO19 4DS</postcode>
            <telephone>01905149894</telephone>
            <mobile>07784467941</mobile>
            <email>[email protected]</email>
          </personal>
          <account>
            <username>AmbientExpert</username>
            <password>8e64214160e9dd14ae2a6d9f700004a6</password>
            <userlevel>2</userlevel>
            <signupdate>2010-03-26T09:23:50</signupdate>
          </account>
        </user>
        <volunteer id="2">
          <roles>Van Driver</roles>
          <region>South Central</region>
        </volunteer>
      </member>
      <member>
        <user id="3">
          <personal>
            <name>Skye Saunders</name>
            <sex>Female</sex>
            <address1>31 Anns Court</address1>
            <address2></address2>
            <city>Cirencester</city>
            <county>Gloucestershire</county>
            <postcode>GL7 1JG</postcode>
            <telephone>01958303514</telephone>
            <mobile>07260491667</mobile>
            <email>[email protected]</email>
          </personal>
          <account>
            <username>BigUndecided</username>
            <password>ea297847f80e046ca24a8621f4068594</password>
            <userlevel>2</userlevel>
            <signupdate>2010-03-26T09:23:50</signupdate>
          </account>
        </user>
        <volunteer id="3">
          <roles>Scaffold Erector</roles>
          <region>South West</region>
        </volunteer>
      </member>
    </folktask>
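    A hedged reading of what goes wrong above: inside the innermost for-each the context node is a volunteer, so the bare <xsl:apply-templates/> applies templates to the volunteer's children (roles, region), and the built-in templates just copy their text out, which matches the symptom of seeing only what is between the volunteer tags. A sketch of the likely fix is to apply templates to the enclosing member instead:

    <div class="userdiv">
      <xsl:apply-templates select=".."/>
    </div>

    The existing match="folktask/member" template should then fire for each member and pull out the user id and name as intended.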

    Read the article

< Previous Page | 811 812 813 814 815 816 817 818 819 820  | Next Page >