Search Results

Search found 4509 results on 181 pages for 'scope chain'.


  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind an Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors, obviously because no auth headers are sent along with Kibana's Ajax calls. To implement authentication quick and dirty, I added the line below to elastic-angular-client.js in the Kibana vendor directory. But for some reason it does not work.

        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');

    What is the best approach and place to implement basic authentication in Kibana?

        /*! elastic.js - v1.1.1 - 2013-05-24
         * https://github.com/fullscale/elastic.js
         * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */
        /*jshint browser:true */
        /*global angular:true */
        'use strict';

        /* Angular.js service wrapping the elastic.js API.
           This module can simply be injected into your angular controllers. */
        angular.module('elasticjs.service', [])
          .factory('ejsResource', ['$http', function ($http) {
            return function (config) {
              var
                // use existing ejs object if it exists
                ejs = window.ejs || {},

                /* results are returned as a promise */
                promiseThen = function (httpPromise, successcb, errorcb) {
                  return httpPromise.then(function (response) {
                    (successcb || angular.noop)(response.data);
                    return response.data;
                  }, function (response) {
                    (errorcb || angular.noop)(response.data);
                    return response.data;
                  });
                };

              // check if we have a config object
              // if not, we have the server url so
              // we convert it to a config object
              if (config !== Object(config)) {
                config = {server: config};
              }
              // set url to empty string if it was not specified
              if (config.server == null) {
                config.server = '';
              }

              /* implement the elastic.js client interface for angular */
              ejs.client = {
                server: function (s) {
                  if (s == null) {
                    return config.server;
                  }
                  config.server = s;
                  return this;
                },
                post: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  console.log($http.defaults.headers);
                  path = config.server + path;
                  var reqConfig = {url: path, data: data, method: 'POST'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                get: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  // no body on get request, data will be request params
                  var reqConfig = {url: path, params: data, method: 'GET'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                put: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  var reqConfig = {url: path, data: data, method: 'PUT'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                del: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  var reqConfig = {url: path, data: data, method: 'DELETE'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                head: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  // no body on HEAD request, data will be request params
                  var reqConfig = {url: path, params: data, method: 'HEAD'};
                  return $http(angular.extend(reqConfig, config))
                    .then(function (response) {
                      (successcb || angular.noop)(response.headers());
                      return response.headers();
                    }, function (response) {
                      (errorcb || angular.noop)(undefined);
                      return undefined;
                    });
                }
              };
              return ejs;
            };
          }]);

    UPDATE 1: I implemented Matt's suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact that I am running Kibana on port 81 and elasticsearch on 8181?

        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.46.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.46.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache

    This is the response:

        HTTP/1.1 401 Authorization Required
        Date: Fri, 08 Nov 2013 23:47:02 GMT
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

    UPDATE 2: Updated all instances with the modified headers in these Kibana files:

        root@localhost:/var/www/kibana# grep -r 'ejsResource(' .
        ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});

    And modified my vhost conf for the reverse proxy like this:

        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/cake2.2.4/.htpasswd
                Require valid-user
                Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT"
                Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization"
                Header always set Access-Control-Allow-Credentials "true"
                Header always set Cache-Control "max-age=0"
                Header always set Access-Control-Allow-Origin *
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>

    Apache sends back the new response headers, but the request header still seems to be wrong somewhere. Authentication just doesn't work.

    Request Headers:

        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.26.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.26.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache

    Response Headers:

        HTTP/1.1 401 Authorization Required
        Date: Sat, 09 Nov 2013 08:48:48 GMT
        Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT
        Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization
        Access-Control-Allow-Credentials: true
        Cache-Control: max-age=0
        Access-Control-Allow-Origin: *
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

    SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic, but it appears that in order to solve my problem, it would be necessary to make some very granular configurations on Apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution: just modify the vhost reverse proxy config to serve the elasticsearch server AND Kibana on the same http port. This also adds even better security to Kibana. This is what I did:

        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/.htpasswd
                Require valid-user
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>
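
    The 401 on the OPTIONS request above is the telltale: browsers send CORS preflight requests without credentials, so a vhost that demands Basic auth on every method rejects the preflight before the real POST is ever attempted. For anyone who wants the granular CORS route instead of the single-port proxy chosen above, a minimal sketch would exempt OPTIONS from authentication (paths and origin are taken from this post; note too that Access-Control-Allow-Origin cannot be * once Access-Control-Allow-Credentials is true):

        <Location />
            Order deny,allow
            Allow from all
            AuthType Basic
            AuthName "Username/Password"
            AuthUserFile /var/www/cake2.2.4/.htpasswd
            # Preflight requests carry no credentials, so only
            # non-OPTIONS methods must authenticate.
            <LimitExcept OPTIONS>
                Require valid-user
            </LimitExcept>
            Header always set Access-Control-Allow-Origin "http://46.252.46.173:81"
            Header always set Access-Control-Allow-Credentials "true"
            Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT"
            Header always set Access-Control-Allow-Headers "Content-Type, Origin, Accept, Authorization"
        </Location>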


  • Why do I get a connection error / timeout when using python suds to connect to Microsoft CRM?

    - by Chris R
    When I try to connect to an MS CRM web service using suds/python-ntlm, I am getting a timeout on requests. However, the code that I'm trying to replace -- which calls out to the cURL command line app to do the same call -- succeeds. Clearly something is different in the way that cURL is sending the command data, but I'll be damned if I know what the difference is. Below are the full details of the various calls. Anyone got any tips? Here's the code that is making the request, followed by the output. The cURL command code is below that, and its response follows. Hosts, users, and passwords have been changed to protect the innocent, of course.

        wsdl_url = 'https://client.service.host/MSCrmServices/2007/MetadataService.asmx?WSDL'
        username = r'domain\user.name'
        password = 'userpass'

        from suds.transport.https import WindowsHttpAuthenticated
        from suds.client import Client

        import logging
        logging.basicConfig(level=logging.INFO)
        logging.getLogger('suds.client').setLevel(logging.DEBUG)
        logging.getLogger('suds.transport').setLevel(logging.DEBUG)

        ntlmTransport = WindowsHttpAuthenticated(username=username, password=password)
        metadata_client = Client(wsdl_url, transport=ntlmTransport)

        request = metadata_client.factory.create('RetrieveAttributeRequest')
        request.MetadataId = '00000000-0000-0000-0000-000000000000'
        request.EntityLogicalName = 'opportunity'
        request.LogicalName = 'new_typeofcontact'
        request.RetrieveAsIfPublished = 'false'

        attr = metadata_client.service.Execute(request)
        print attr

    Here's the output:

        DEBUG:suds.client:sending to (http://client.service.host/MSCrmServices/2007/MetadataService.asmx)
        message:
        <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
          <SOAP-ENV:Header/>
          <ns0:Body>
            <ns1:Execute>
              <ns1:Request xsi:type="ns1:RetrieveAttributeRequest">
                <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId>
                <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName>
                <ns1:LogicalName>new_typeofcontact</ns1:LogicalName>
                <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished>
              </ns1:Request>
            </ns1:Execute>
          </ns0:Body>
        </SOAP-ENV:Envelope>
        DEBUG:suds.client:headers = {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml'}
        DEBUG:suds.transport.http:sending:
        URL:http://client.service.host/MSCrmServices/2007/MetadataService.asmx
        HEADERS: {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml', 'Content-type': 'text/xml', 'Soapaction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"'}
        MESSAGE:
        <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
          <SOAP-ENV:Header/>
          <ns0:Body>
            <ns1:Execute>
              <ns1:Request xsi:type="ns1:RetrieveAttributeRequest">
                <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId>
                <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName>
                <ns1:LogicalName>new_typeofcontact</ns1:LogicalName>
                <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished>
              </ns1:Request>
            </ns1:Execute>
          </ns0:Body>
        </SOAP-ENV:Envelope>

        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in
multi-line statement', (16, 0)) --------------------------------------------------------------------------- URLError Traceback (most recent call last) /Users/crose/projects/2366/crm/<ipython console> in <module>() /var/folders/nb/nbJAzxR1HbOppPcs6xO+dE+++TY/-Tmp-/python-67186icm.py in <module>() 19 request.LogicalName = 'new_typeofcontact' 20 request.RetrieveAsIfPublished = 'false' 21 ---> 22 attr = metadata_client.service.Execute(request) 23 print attr /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in __call__(self, *args, **kwargs) 537 return (500, e) 538 else: --> 539 return client.invoke(args, kwargs) 540 541 def faults(self): /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in invoke(self, args, kwargs) 596 self.method.name, timer) 597 timer.start() --> 598 result = self.send(msg) 599 timer.stop() 600 metrics.log.debug( /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in send(self, msg) 621 request = Request(location, str(msg)) 622 request.headers = self.headers() --> 623 reply = transport.send(request) 624 if retxml: 625 result = reply.message /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/https.pyc in send(self, request) 62 def send(self, request): 63 self.addcredentials(request) ---> 64 return HttpTransport.send(self, request) 65 66 def addcredentials(self, request): /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in send(self, request) 75 request.headers.update(u2request.headers) 76 log.debug('sending:\n%s', request) ---> 77 fp = self.u2open(u2request) 78 self.getcookies(fp, u2request) 79 result = Reply(200, fp.headers.dict, fp.read()) /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in u2open(self, u2request) 116 return url.open(u2request) 117 else: --> 118 return url.open(u2request, timeout=tm) 119 120 def u2opener(self): /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in open(self, fullurl, data, timeout) 381 req = meth(req) 382 --> 383 response = self._open(req, data) 384 385 # post-process response /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _open(self, req, data) 399 protocol = req.get_type() 400 result = self._call_chain(self.handle_open, protocol, protocol + --> 401 '_open', req) 402 if result: 403 return result /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _call_chain(self, chain, kind, meth_name, *args) 359 func = getattr(handler, meth_name) 360 --> 361 result = func(*args) 362 if result is not None: 363 return result /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in http_open(self, req) 1128 1129 def http_open(self, req): -> 1130 return self.do_open(httplib.HTTPConnection, req) 1131 1132 http_request = AbstractHTTPHandler.do_request_ /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in do_open(self, http_class, req) 1103 r = h.getresponse() 1104 except socket.error, err: # XXX what error? 
-> 1105 raise URLError(err) 1106 1107 # Pick apart the HTTPResponse object to get the addinfourl URLError: <urlopen error [Errno 60] Operation timed out>

    The cURL command is:

        /opt/local/bin/curl --ntlm -u "domain\user.name:userpass" -k -d @- \
          -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.1)" \
          -H "Connection: Keep-Alive" \
          -H "Content-Type: text/xml; charset=utf-8" \
          -H "SOAPAction: http://schemas.microsoft.com/crm/2007/WebServices/Execute" \
          https://client.service.host/MSCrmServices/2007/MetadataService.asmx

    The data that is piped to that cURL command:

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Header>
            <CrmAuthenticationToken xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <AuthenticationType xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">0</AuthenticationType>
              <CrmTicket xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes"></CrmTicket>
              <OrganizationName xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">CMIFS</OrganizationName>
              <CallerId xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">00000000-0000-0000-0000-000000000000</CallerId>
            </CrmAuthenticationToken>
          </soap:Header>
          <soap:Body>
            <Execute xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <Request xsi:type="RetrieveAttributeRequest">
                <MetadataId>00000000-0000-0000-0000-000000000000</MetadataId>
                <EntityLogicalName>opportunity</EntityLogicalName>
                <LogicalName>new_typeofcontact</LogicalName>
                <RetrieveAsIfPublished>false</RetrieveAsIfPublished>
              </Request>
            </Execute>
          </soap:Body>
        </soap:Envelope>

    Here's the response:

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Body>
            <ExecuteResponse xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <Response xsi:type="RetrieveAttributeResponse">
                <AttributeMetadata xsi:type="PicklistAttributeMetadata">
                  <MetadataId>101346cf-a6af-4eb4-a4bf-9c3c6bbd6582</MetadataId>
                  <SchemaName>New_TypeofContact</SchemaName>
                  <LogicalName>new_typeofcontact</LogicalName>
                  <EntityLogicalName>opportunity</EntityLogicalName>
                  <AttributeType>
                    <Value>Picklist</Value>
                  </AttributeType>
                  <!-- stuff here -->
                </AttributeMetadata>
              </Response>
            </ExecuteResponse>
          </soap:Body>
        </soap:Envelope>
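
    One difference is visible in the debug output itself: suds posts to http://client.service.host/... (the address advertised inside the WSDL), while the working cURL call posts to the https URL. If port 80 on that host silently drops connections, that alone produces exactly this timeout. A sketch of the usual override, using suds' documented location and timeout options (whether this resolves the hang depends on the server, so treat it as a diagnostic step rather than a confirmed fix):

        from suds.client import Client
        from suds.transport.https import WindowsHttpAuthenticated

        wsdl_url = 'https://client.service.host/MSCrmServices/2007/MetadataService.asmx?WSDL'
        ntlm = WindowsHttpAuthenticated(username=r'domain\user.name', password='userpass')

        # Force requests to the HTTPS endpoint instead of the http:// address
        # embedded in the WSDL, and fail fast instead of hanging indefinitely.
        metadata_client = Client(
            wsdl_url,
            transport=ntlm,
            location='https://client.service.host/MSCrmServices/2007/MetadataService.asmx',
            timeout=30,
        )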


  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code with

        java -Djavax.net.debug=ssl LDAPConnector

    I get the following exception trace (java version 1.6.0_17):

        trigger seeding of SecureRandom
        done seeding SecureRandom
        %% No cached client session
        *** ClientHello, TLSv1
        RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 }
        Session ID: {}
        Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA]
        Compression Methods: { 0 }
        ***
        Thread-0, WRITE: TLSv1 Handshake, length = 73
        Thread-0, WRITE: SSLv2 client hello message, length = 98
        Thread-0, received EOFException: error
        Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure
        Thread-0, WRITE: TLSv1 Alert, length = 2
        Thread-0, called closeSocket()
        main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake]
            at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
            at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
            at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
            at javax.naming.InitialContext.init(Unknown Source)
            at javax.naming.InitialContext.<init>(Unknown Source)
            at javax.naming.directory.InitialDirContext.<init>(Unknown Source)
            at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43)
            at LDAPConnector.main(LDAPConnector.java:237)
        Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at com.sun.jndi.ldap.Connection.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.io.EOFException: SSL peer shut down incorrectly
            at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
            ... 9 more

    I am, however, able to connect to the same secure LDAP server if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs as mentioned in this guide -- OpenLDAP with SSL.

    When I run ldapsearch -x on the server I get:

        # extended LDIF
        #
        # LDAPv3
        # base <dc=localdomain> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # localdomain
        dn: dc=localdomain
        objectClass: top
        objectClass: dcObject
        objectClass: organization
        o: localdomain
        dc: localdomain

        # admin, localdomain
        dn: cn=admin,dc=localdomain
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator

        # search result
        search: 2
        result: 0 Success

        # numResponses: 3
        # numEntries: 2

    On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate.

    My slapd.conf file is as follows:

        #######################################################################
        # Global Directives:
        # Features to permit
        #allow bind_v2
        # Schema and objectClass definitions
        include /etc/ldap/schema/core.schema
        include /etc/ldap/schema/cosine.schema
        include /etc/ldap/schema/nis.schema
        include /etc/ldap/schema/inetorgperson.schema
        # Where the pid file is put. The init.d script
        # will not stop the server if you change this.
        pidfile /var/run/slapd/slapd.pid
        # List of arguments that were passed to the server
        argsfile /var/run/slapd/slapd.args
        # Read slapd.conf(5) for possible values
        loglevel none
        # Where the dynamically loaded modules are stored
        modulepath /usr/lib/ldap
        moduleload back_hdb
        # The maximum number of entries that is returned for a search operation
        sizelimit 500
        # The tool-threads parameter sets the actual amount of cpu's that is used
        # for indexing.
        tool-threads 1
        #######################################################################
        # Specific Backend Directives for hdb:
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        backend hdb
        #######################################################################
        # Specific Backend Directives for 'other':
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        #backend <other>
        #######################################################################
        # Specific Directives for database #1, of type hdb:
        # Database specific directives apply to this database until another
        # 'database' directive occurs
        database hdb
        # The base of your directory in database #1
        suffix "dc=localdomain"
        # rootdn directive for specifying a superuser on the database. This is needed
        # for syncrepl.
        rootdn "cn=admin,dc=localdomain"
        # Where the database file are physically stored for database #1
        directory "/var/lib/ldap"
        # The dbconfig settings are used to generate a DB_CONFIG file the first
        # time slapd starts. They do NOT override an existing DB_CONFIG
        # file. You should therefore change these settings in DB_CONFIG directly
        # or remove DB_CONFIG and restart slapd for changes to take effect.
        # For the Debian package we use 2MB as default but be sure to update this
        # value if you have plenty of RAM
        dbconfig set_cachesize 0 2097152 0
        # Sven Hartge reported that he had to set this value incredibly high
        # to get slapd running at all. See http://bugs.debian.org/303057 for more
        # information.
        # Number of objects that can be locked at the same time.
        dbconfig set_lk_max_objects 1500
        # Number of locks (both requested and granted)
        dbconfig set_lk_max_locks 1500
        # Number of lockers
        dbconfig set_lk_max_lockers 1500
        # Indexing options for database #1
        index objectClass eq
        # Save the time that the entry gets modified, for database #1
        lastmod on
        # Checkpoint the BerkeleyDB database periodically in case of system
        # failure and to speed slapd shutdown.
        checkpoint 512 30
        # Where to store the replica logs for database #1
        # replogfile /var/lib/ldap/replog
        # The userPassword by default can be changed
        # by the entry owning it if they are authenticated.
        # Others should not be able to see it, except the
        # admin entry below
        # These access lines apply to database #1 only
        access to attrs=userPassword,shadowLastChange
            by dn="cn=admin,dc=localdomain" write
            by anonymous auth
            by self write
            by * none
        # Ensure read access to the base for things like
        # supportedSASLMechanisms. Without this you may
        # have problems with SASL not knowing what
        # mechanisms are available and the like.
        # Note that this is covered by the 'access to *'
        # ACL below too but if you change that as people
        # are wont to do you'll still need this if you
        # want SASL (and possible other things) to work
        # happily.
        access to dn.base="" by * read
        # The admin dn has full write access, everyone else
        # can read everything.
        access to *
            by dn="cn=admin,dc=localdomain" write
            by * read
        # For Netscape Roaming support, each user gets a roaming
        # profile for which they have write access to
        #access to dn=".*,ou=Roaming,o=morsnet"
        #    by dn="cn=admin,dc=localdomain" write
        #    by dnattr=owner write
        #######################################################################
        # Specific Directives for database #2, of type 'other' (can be hdb too):
        # Database specific directives apply to this database until another
        # 'database' directive occurs
        #database <other>
        # The base of your directory for database #2
        #suffix "dc=debian,dc=org"
        #######################################################################
        # SSL:
        # Uncomment the following lines to enable SSL and use the default
        # snakeoil certificates.
        #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
        TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
        TLSCACertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateKeyFile /etc/ldap/ssl/server.pem

    My ldap.conf file is:

        #
        # LDAP Defaults
        #
        # See ldap.conf(5) for details
        # This file should be world readable but not world writable.
        HOST ldap.natraj.com
        PORT 636
        BASE dc=localdomain
        URI ldaps://ldap.natraj.com
        TLS_CACERT /etc/ldap/ssl/server.pem
        TLS_REQCERT allow
        #SIZELIMIT 12
        #TIMELIMIT 15
        #DEREF never
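
    Worth noting about the trace above: the ClientHello offers AES-128 suites at best, while the slapd.conf line TLSCipherSuite TLS_RSA_AES_256_CBC_SHA would restrict the server to AES-256, which could make it hang up exactly as shown. A small diagnostic sketch to compare what each JRE actually offers (if no AES-256 suite appears, install the JCE Unlimited Strength policy files for that JRE, or relax the TLSCipherSuite directive on the server):

        import javax.net.ssl.SSLSocketFactory;

        // Prints the cipher suites this JRE enables by default versus the
        // full set it supports. Run once under 1.6.0_14 and once under
        // 1.6.0_17 to see whether the enabled lists differ.
        public class ListCipherSuites {
            public static void main(String[] args) {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                for (String suite : factory.getDefaultCipherSuites()) {
                    System.out.println("enabled:   " + suite);
                }
                for (String suite : factory.getSupportedCipherSuites()) {
                    System.out.println("supported: " + suite);
                }
            }
        }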


  • Impossible to do POSTs with appengine-jruby/RoR: Reflection is not allowed

    - by Joel Cuevas
    I'm trying to build a site with RoR on Google App Engine. I'm using the google-appengine gem (http://appengine-jruby.googlecode.com) and following the instructions in (http://gist.github.com/268192). The problem is that I can't submit ANY form! I've already tried this in two different clean Win 7 Pro envs and the result is the same. After installing Ruby 1.8.6 (One-Click Installer):

    1. gem update --system
    2. gem install rails
    3. gem install google-appengine
    4. gem install rails_dm_datastore
    5. gem install activerecord-nulldb-adapter
    6. curl -O http://appengine-jruby.googlecode.com/hg/demos/rails2/rails2_appengine.rb
    7. ruby rails2_appengine.rb (previously downloaded)
    8. rails myproj
    9. chmod myproj
    10. ruby script/generate dd_model MyModel f1:string f2:float f3:float f4:float f5:integer f6:integer f7:integer -f
    11. ruby script/generate scaffold MyModel f1:string f2:float f3:float f4:float f5:integer f6:integer f7:integer -f --skip-migration
    12. dev_appserver.rb -p 3000 .

    At this point, I manually test the scaffold in (http://localhost:3000/my_models). The index is OK; then I create a new registry with the generated form, and everything's fine, but when I try to create a second one, I get a "java.lang.RuntimeException: DummyDynamicScope should never be used for backref storage" in the console. As far as I read, this is a won't-fix behavior in JRuby 1.4.1, but it's converted to a debug-only warning in 1.5.0, so I proceed to install the pre release.

    13. gem install appengine-jruby-jars --pre

    With this, that exception is solved and everything works great... until I move the project to the GAE server.

    14. ruby appcfg.rb update .

    And now, in (http://myproj.appspot.com/my_models), again, the index is fine, also the new form, but the moment I submit it with valid data, I get a 500 error: "java.lang.IllegalAccessException: Reflection is not allowed on public int". As I said, this behavior is not present in the local SDK. In both cases, I'm completely unable to post anything.
    This is what I have right now in the GAE environment:

        Ruby version             1.8.7 (java)
        RubyGems                 disabled
        Rack version             1.1
        Rails version            2.3.5
        Action Pack version      2.3.5
        Active Support version   2.3.5
        DataMapper version       0.10.2
        Environment              production
        JRuby Runtime version    1.5.0.pre
        JRuby-Rack version       0.9.7
        AppEngine SDK version    Google App Engine/1.3.3
        AppEngine APIs version   0.0.15

    And these are my installed gems:

        actionmailer (2.3.5) actionpack (2.3.5) activerecord (2.3.5) activerecord-nulldb-adapter (0.2.0) activeresource (2.3.5) activesupport (2.3.5) addressable (2.1.2) appengine-apis (0.0.15) appengine-jruby-jars (0.0.8.pre, 0.0.7) appengine-rack (0.0.8) appengine-sdk (1.3.3.1) appengine-tools (0.0.12) bundler08 (0.8.5) dm-appengine (0.0.8) dm-ar-finders (0.10.2) dm-core (0.10.2) dm-timestamps (0.10.2) dm-validations (0.10.2) extlib (0.9.14) fxri (0.3.7, 0.3.6) google-appengine (0.0.12) hpricot (0.8.2 x86-mswin32, 0.6 mswin32) jruby-rack (0.9.8, 0.9.7) log4r (1.1.7, 1.0.5) rack (1.1.0, 1.0.1) rails (2.3.5) rails_appengine (0.0.3) rails_dm_datastore (0.2.9) rake (0.8.7, 0.7.3) rubygems-update (1.3.7, 1.3.6) rubyzip (0.9.4) sources (0.0.1) win32-api (1.4.6 x86-mswin32-60, 1.0.4 mswin32) win32-clipboard (0.5.2, 0.4.3) win32-dir (0.3.6, 0.3.2) win32-eventlog (0.5.2, 0.4.6) win32-file (0.6.3, 0.5.4) win32-file-stat (1.3.4, 1.2.7) win32-process (0.6.2, 0.5.3) win32-sapi (0.1.5, 0.1.4) win32-sound (0.4.2, 0.4.1) windows-api (0.4.0, 0.2.0) windows-pr (1.0.9, 0.7.2)

    I'm unable to attach the full logs of the exceptions because of the character limits, but I can provide them on request. Here's an abstract of them.

    DummyDynamicScope (dev and prod envs):

        14-may-2010 7:18:40 com.google.appengine.tools.development.ApiProxyLocalImpl log
        SEVERE: [1273821520195000] javax.servlet.ServletContext log: Application Error
        java.lang.RuntimeException: DummyDynamicScope should never be used for backref storage
            at org.jruby.runtime.scope.DummyDynamicScope.getBackRef(DummyDynamicScope.java:49)
            at org.jruby.RubyRegexp.updateBackRef(RubyRegexp.java:1404)
            at org.jruby.RubyRegexp.updateBackRef(RubyRegexp.java:1396)
            at org.jruby.RubyRegexp.search(RubyRegexp.java:1386)
            at org.jruby.RubyRegexp.op_match(RubyRegexp.java:1301)
            at org.jruby.RubyString.op_match(RubyString.java:1446)
            at org.jruby.RubyString$i_method_1_0$RUBYINVOKER$op_match.call(org/jruby/RubyString$i_method_1_0$RUBYINVOKER$op_match.gen)
            at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodOneOrN.call(JavaMethod.java:721)
            at org.jruby.RubyClass.finvoke(RubyClass.java:472)
            at org.jruby.RubyObject.send(RubyObject.java:1442)
            at org.jruby.RubyObject$i_method_multi$RUBYINVOKER$send.call(org/jruby/RubyObject$i_method_multi$RUBYINVOKER$send.gen)
            at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrNBlock.call(JavaMethod.java:276)
            at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:330)
            at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:189)
            at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with_comparison
            at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with_comparison
            at org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:102)
            at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:144)
            at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:280)
            at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:69)
            at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
            at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
            at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:229)
            at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:193)
            at org.jruby.RubyClass.finvoke(RubyClass.java:491)
            at org.jruby.RubyObject.send(RubyObject.java:1448)
            at org.jruby.RubyObject$i_method_multi$RUBYINVOKER$send.call(org/jruby/RubyObject$i_method_multi$RUBYINVOKER$send.gen)
            at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrThreeOrNBlock.call(JavaMethod.java:293)
            at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:350)
            at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:229)
            at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with28985350_50
            at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with28985350_50
            at org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:221)
            at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:201)
            at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:227)
            at org.jruby.ast.FCallThreeArgNode.interpret(FCallThreeArgNode.java:40)

    Reflection (only prod env):

        Java::JavaLang::SecurityException (java.lang.IllegalAccessException: Reflection is not allowed on public int java.lang.String$CaseInsensitiveComparator.compare(java.lang.String,java.lang.String)):
            com.google.appengine.runtime.Request.process-92563a0605f433ea(Request.java)
            java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:40)
            org.jruby.javasupport.JavaMethod.<init>(JavaMethod.java:176)
            org.jruby.javasupport.JavaMethod.create(JavaMethod.java:183)
            org.jruby.java.invokers.MethodInvoker.createCallable(MethodInvoker.java:23)
            org.jruby.java.invokers.RubyToJavaInvoker.<init>(RubyToJavaInvoker.java:63)
            org.jruby.java.invokers.MethodInvoker.<init>(MethodInvoker.java:13)
            org.jruby.java.invokers.InstanceMethodInvoker.<init>(InstanceMethodInvoker.java:15)
            org.jruby.javasupport.JavaClass$InstanceMethodInvokerInstaller.install(JavaClass.java:339)
            org.jruby.javasupport.JavaClass.installClassMethods(JavaClass.java:723)
            org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:586)
            org.jruby.javasupport.Java.createProxyClass(Java.java:506)
            org.jruby.javasupport.Java.getProxyClass(Java.java:445)
            org.jruby.javasupport.Java.getInstance(Java.java:354)
            org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:143)
            org.jruby.javasupport.JavaClass$ConstantField.install(JavaClass.java:360)
            org.jruby.javasupport.JavaClass.installClassFields(JavaClass.java:711)
            org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:585)
            org.jruby.javasupport.Java.createProxyClass(Java.java:506)
            org.jruby.javasupport.Java.getProxyClass(Java.java:445)
            org.jruby.javasupport.Java.getProxyOrPackageUnderPackage(Java.java:885)
            org.jruby.javasupport.Java.get_proxy_or_package_under_package(Java.java:918)
            org.jruby.javasupport.JavaUtilities.get_proxy_or_package_under_package(JavaUtilities.java:54)
            org.jruby.javasupport.JavaUtilities$s_method_2_0$RUBYINVOKER$get_proxy_or_package_under_package.call(org/jruby/javasupport/JavaUtilities$s_method_2_0$RUBYINVOKER$get_proxy_or_package_under_package.gen:65535)
            org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:329)
            org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:188)
            org.jruby.ast.CallTwoArgNode.interpret(CallTwoArgNode.java:59)
            org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
            org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
            org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:113)
            org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:138)
            org.jruby.javasupport.util.RuntimeHelpers$MethodMissingMethod.call(RuntimeHelpers.java:389)
            org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:182)

    What should I do now? Any hint would be welcome. Thanks!
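
    The production-only trace implicates dm-validations' numeric comparator, which drags a java.lang.String proxy into JRuby's Java integration layer and trips GAE's reflection restrictions. One hypothetical workaround, sketched against DataMapper 0.10.2 (the property name is a placeholder taken from the scaffold above, and this assumes the auto-generated numeric validation is the only reflective trigger), is to switch off auto-validation for the float/integer properties and validate by hand in plain Ruby:

        require 'dm-core'
        require 'dm-validations'

        class MyModel
          include DataMapper::Resource
          property :id, Serial
          # :auto_validation => false stops dm-validations from generating the
          # numeric comparison that appears in the stack trace above.
          property :f2, Float, :auto_validation => false
          validates_with_method :f2_is_numeric

          # Pure-Ruby check; no Java String proxy, so no reflection on GAE.
          def f2_is_numeric
            return true if f2.nil? || f2.kind_of?(Numeric)
            [false, 'f2 must be a number']
          end
        end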


  • calling and killing a parent function with onmouseover and onmouseout events

    - by Zoolu
    I want to call the function upon onmouseover="ParentFunction();" and then kill it on onmouseout="killParent();". Note: in my code the parent function is called initiate() and the killer function is called reset(), which lies outside the parent function at the bottom of the script. I don't know how to kill the initiate() function; my first guess was:

        var reset = function(){ return initiate(); };

    Here's my source code; any suggestions and help are appreciated.

        <!doctype html>
        <html>
        <head>
            <title> function/event prototype </title>
            <link rel="stylesheet" type="text/css" href="styling.css" />
        </head>
        <body>
            <h2> <em>Fantastical place<br/>prototype</em> </h2>
            <div id="button-container">
                <div id="button-box">
                    <button id="activate" onmouseover="initiate()" onmouseout="reset();" width="50px" height="50px" title="Activate"> </button>
                </div>
                <div id="text-box"> </div>
            </div>
            <div id="container">
                <canvas id="playground" width="200px" height="250px"> </canvas>
                <canvas id="face" width="400px" height="200px"> </canvas>
                <!-- <div id="clear"> </div> -->
            </div>
            <script>
                alert("Welcome, there are x entries as of" +""+new Date().getHours());
                //global scope
                var i=0;
                var c1 = []; //c is short for collect
                var c2 = [];
                var c3 = [];
                var c4 = [];
                var c5 = [];
                var c6 = [];

                var initiate = function(){ //the button that triggers the program
                    var timer = setInterval(function(){clock()},90); //copy this block for ref.
                    function clock(){
                        i+=1;
                        var a = Math.round(Math.random()*200);
                        var b = Math.round(Math.random()*250);
                        var c = Math.round(Math.random()*200);
                        var d = Math.round(Math.random()*250);
                        var e = Math.round(Math.random()*200);
                        var f = Math.round(Math.random()*250);
                        c1.push(a);
                        c2.push(b);
                        c3.push(c);
                        c4.push(d);
                        c5.push(e);
                        c6.push(f);
                        // document.write(i);
                        var c = document.getElementById("playground");
                        var ctx = c.getContext("2d");
                        ctx.beginPath();
                        ctx.moveTo(c3[i-2], c4[i-2]);
                        ctx.bezierCurveTo(c1[i-2],c2[i-2],c5[i-2],c6[i-2],c3[i-1], c4[i-1]);
                        // ctx.lineTo(c3[i-1], c4[i-1]);
                        if(a<200){
                            ctx.strokeStyle="#FF33CC";
                        } else if(a<400){
                            ctx.strokeStyle="#FF33aa";
                        } else{
                            ctx.strokeStyle="#FF3388";
                        }
                        ctx.stroke();
                        document.getElementById("text-box").innerHTML=i+"<p>Thoughts.</p>";
                        if(i===20){
                            //alert("15 reached");
                            clearInterval(timer); //to clearInterval must be using a global scoped variable.
                            return;
                        }
                    }; //end of clock
                    //setInterval(clock,150);

                    var targetFace = document.getElementById("face");
                    var face = targetFace.getContext("2d");
                    var faceTimer = setInterval(function(){faceAnim()},80); //copy this block for ref. global scoped.
                    function faceAnim(){
                        face.beginPath();
                        face.strokeStyle="#FF33CC";
                        face.moveTo(100,104); //eye line
                        face.bezierCurveTo(150,125,250,125,300,104);
                        face.moveTo(200,1); //centre line
                        face.lineTo(200,400);
                        face.moveTo(125,111); //left eye lid
                        face.bezierCurveTo(160,135,170,130,185,120);
                        face.moveTo(150,116); //left eye
                        face.bezierCurveTo(155,125,165,125,170,118);
                        face.moveTo(275,111); //right eye lid
                        face.bezierCurveTo(240,135,230,130,215,120);
                        face.moveTo(250,116); //right eye
                        face.bezierCurveTo(245,125,235,125,230,118);
                        face.moveTo(195, 118); //left nose
                        face.lineTo(190, 160);
                        face.lineTo(200,170);
                        face.moveTo(190,160); //left nostroll
                        face.lineTo(180,160);
                        face.lineTo(191,154);
                        face.moveTo(180,160); //left lower nostrol
                        face.lineTo(200,170);
                        face.moveTo(205, 118); //right nose
                        face.lineTo(210, 160);
                        face.lineTo(200,170);
                        face.moveTo(210,160); //right nostroll
                        face.lineTo(220,160);
                        face.lineTo(209,154);
                        face.moveTo(220,160); //right lower nostrol
                        face.lineTo(200,170);
                        face.moveTo(200,140); //outer triad
                        face.lineTo(170, 100);
                        face.lineTo(230, 100);
                        face.lineTo(200, 140);
                        face.moveTo(200,145); //outer triad drop shadow
                        face.lineTo(170, 100);
                        face.lineTo(230, 100);
                        face.lineTo(200, 145);
                        face.moveTo(200,130); //inner triad
                        face.lineTo(180, 105);
                        face.lineTo(220, 105);
                        face.lineTo(200, 130);
                        //face.lineWidth =0.6;
                        face.moveTo(280,111); //outer right eye lid
                        face.bezierCurveTo(240,140,230,135,210,120);
                        face.moveTo(120,111); //outer left eye lid
                        face.bezierCurveTo(160,140,170,135,190,120);
                        face.moveTo(162,174); //upper mouth line
                        face.bezierCurveTo(170,180,230,180,238,174);
                        face.moveTo(165,175); //mouth line bottom
                        face.bezierCurveTo(190,Math.floor(Math.random()*25+180),210,Math.floor(Math.random()*25+180),235,175);
                        face.moveTo(232,204); //head shape
                        face.lineTo(340, 20);
                        face.moveTo(168,204); //head shape
                        face.lineTo(60, 20);
                        face.stroke(); //execute all co-ords.
                    }; //end of face anim

                    var clearFace = function(){
                        document.getElementById('face').getContext('2d').clearRect(0, 0, 700, 750);
                    };
                    setInterval(clearFace,90);
                }; //end of parent function

                var reset = function(){
                    document.getElementById('playground').getContext('2d').clearRect(0, 0, 700, 750);
                    //clearInterval(faceTimer);
                    //delete initiate();
                };
            </script>
        </body>
        </html>
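
    A function cannot be "killed" as such; what keeps initiate() alive is its three timers (timer, faceTimer, and the anonymous clearFace interval), and since they are declared as locals inside initiate(), reset() can never reach them. A minimal sketch of the usual fix, hoisting the handles into the outer scope that both functions share (clock, faceAnim, and clearFace stand for the functions from the source above, assumed to be defined where both can see them):

        // Declared in the enclosing scope so both initiate() and reset() see them.
        var timer, faceTimer, clearFaceTimer;

        var initiate = function () {
            timer = setInterval(clock, 90);
            faceTimer = setInterval(faceAnim, 80);
            clearFaceTimer = setInterval(clearFace, 90);
        };

        var reset = function () {
            // Stop every interval started by initiate(), then wipe both canvases.
            clearInterval(timer);
            clearInterval(faceTimer);
            clearInterval(clearFaceTimer);
            document.getElementById('playground').getContext('2d').clearRect(0, 0, 700, 750);
            document.getElementById('face').getContext('2d').clearRect(0, 0, 700, 750);
        };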


  • problem in concurrent web services

    - by user548750
    Hi all, I have developed a web service. I am running into a problem when two different users access the web service concurrently. The web service has two methods, setInputParameter and getUserService. Suppose:

        Time      User   Operation
        10:10 am  user1  setInputParameter
        10:15 am  user2  setInputParameter
        10:20 am  user1  getUserService

    User1 then gets a result according to the input parameter set by user2, not the one he set himself. I am using axis2 1.4 and an eclipse ant build. My service consists of the User class, the service class, services.xml, the build file, and a test class, which all follow.

    User class:

        package com.jimmy.pojo;

        public class User {
            private String firstName;
            private String lastName;
            private String[] addressCity;

            public String getFirstName() {
                return firstName;
            }
            public void setFirstName(String firstName) {
                this.firstName = firstName;
            }
            public String getLastName() {
                return lastName;
            }
            public void setLastName(String lastName) {
                this.lastName = lastName;
            }
            public String[] getAddressCity() {
                return addressCity;
            }
            public void setAddressCity(String[] addressCity) {
                this.addressCity = addressCity;
            }
        }

    Service class:

        package com.jimmy.service;

        import com.jimmy.pojo.User;

        public class UserService {
            private User user;

            public void setInputParameter(User userInput) {
                user = userInput;
            }

            public User getUserService() {
                user.setFirstName(user.getFirstName() + " changed ");
                if (user.getAddressCity() == null) {
                    user.setAddressCity(new String[] { "New City Added" });
                } else {
                    user.getAddressCity()[0] = "===========";
                }
                return user;
            }
        }

    services.xml:

        <service name="MyWebServices" scope="application">
            <description> My Web Service </description>
            <messageReceivers>
                <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-only"
                    class="org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver" />
                <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
                    class="org.apache.axis2.rpc.receivers.RPCMessageReceiver" />
            </messageReceivers>
            <parameter name="ServiceClass">com.jimmy.service.UserService</parameter>
        </service>

    Build file:

        <project name="MyWebServices" basedir="." default="generate.service">
            <property name="service.name" value="UserService" />
            <property name="dest.dir" value="build" />
            <property name="dest.dir.classes" value="${dest.dir}/${service.name}" />
            <property name="dest.dir.lib" value="${dest.dir}/lib" />
            <property name="axis2.home" value="../../" />
            <property name="repository.path" value="${axis2.home}/repository" />
            <path id="build.class.path">
                <fileset dir="${axis2.home}/lib">
                    <include name="*.jar" />
                </fileset>
            </path>
            <path id="client.class.path">
                <fileset dir="${axis2.home}/lib">
                    <include name="*.jar" />
                </fileset>
                <fileset dir="${dest.dir.lib}">
                    <include name="*.jar" />
                </fileset>
            </path>
            <target name="clean">
                <delete dir="${dest.dir}" />
                <delete dir="src" includes="com/jimmy/pojo/stub/**"/>
            </target>
            <target name="prepare">
                <mkdir dir="${dest.dir}" />
                <mkdir dir="${dest.dir}/lib" />
                <mkdir dir="${dest.dir.classes}" />
                <mkdir dir="${dest.dir.classes}/META-INF" />
            </target>
            <target name="generate.service" depends="clean,prepare">
                <copy file="src/META-INF/services.xml" tofile="${dest.dir.classes}/META-INF/services.xml" overwrite="true" />
                <javac srcdir="src" destdir="${dest.dir.classes}" includes="com/jimmy/service/**,com/jimmy/pojo/**">
                    <classpath refid="build.class.path" />
                </javac>
                <jar basedir="${dest.dir.classes}" destfile="${dest.dir}/${service.name}.aar" />
                <copy file="${dest.dir}/${service.name}.aar" tofile="${repository.path}/services/${service.name}.aar" overwrite="true" />
            </target>
        </project>

    Test class:

        package com.jimmy.test;

        import javax.xml.namespace.QName;
        import org.apache.axis2.AxisFault;
        import org.apache.axis2.addressing.EndpointReference;
        import org.apache.axis2.client.Options;
        import org.apache.axis2.rpc.client.RPCServiceClient;
        import com.jimmy.pojo.User;

        public class MyWebServices {
            @SuppressWarnings("unchecked")
            public static void main(String[] args1) throws AxisFault {
                RPCServiceClient serviceClient = new RPCServiceClient();
                Options options = serviceClient.getOptions();
                EndpointReference targetEPR = new EndpointReference(
                        "http://localhost:8080/axis2/services/MyWebServices");
                options.setTo(targetEPR);

                // Setting the Input Parameter
                QName opSetQName = new QName("http://service.jimmy.com", "setInputParameter");
                User user = new User();
                String[] cityList = new String[] { "Bangalore", "Mumbai" };
                /* We need to set this for user 2 as user 2 */
                user.setFirstName("User 1 first name");
                user.setLastName("User 1 Last name");
                user.setAddressCity(cityList);
                Object[] opSetInptArgs = new Object[] { user };
                serviceClient.invokeRobust(opSetQName, opSetInptArgs);

                // Getting the weather
                QName opGetWeather = new QName("http://service.jimmy.com", "getUserService");
                Object[] opGetWeatherArgs = new Object[] {};
                Class[] returnTypes = new Class[] { User.class };
                Object[] response = serviceClient.invokeBlocking(opGetWeather,
                        opGetWeatherArgs, returnTypes);
                System.out.println("Context :" + serviceClient.getServiceContext());
                User result = (User) response[0];
                if (result == null) {
                    System.out.println("User is not initialized!");
                    return;
                } else {
                    System.out.println("*********printing result********");
                    String[] list = result.getAddressCity();
                    System.out.println(result.getFirstName());
                    System.out.println(result.getLastName());
                    for (int indx = 0; indx < list.length; indx++) {
                        String string = result.getAddressCity()[indx];
                        System.out.println(string);
                    }
                }
            }
        }


  • How to declare a vector/array of reducer objects in Cilk++?

    - by Jin
    Hi all, I had a problem when using Cilk++, an extension to C++ for parallel computing. I found that I can't declare a vector of reducer objects:

        typedef cilk::reducer_opadd<int> T_reducer;
        vector<T_reducer> bitmiss_vec;
        for (int i = 0; i < 24; ++i) {
            T_reducer r;
            bitmiss_vec.push_back(r);
        }

    However, when I compile the code with Cilk++, it complains at the push_back() line:

        cilk++ geneAttack.cilk -O1 -g -lcilkutil -o geneAttack
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, const _Tp&) [with _Tp = cilk::reducer_opadd<int>]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:601: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/ext/new_allocator.h:107: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In member function ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:252: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:230: error: ‘cilk::reducer_opadd<Type>& cilk::reducer_opadd<Type>::operator=(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:256: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In static member function ‘static _BI2 std::__copy_backward<_BoolType, std::random_access_iterator_tag>::__copy_b(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*, bool _BoolType = false]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:465: instantiated from ‘_BI2 std::__copy_backward_aux(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:474: instantiated from ‘static _BI2 std::__copy_backward_normal<<anonymous>, <anonymous> >::__copy_b_n(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*, bool <anonymous> = false, bool <anonymous> = false]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:540: instantiated from ‘_BI2 std::copy_backward(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:253: instantiated from ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:230: error: ‘cilk::reducer_opadd<Type>& cilk::reducer_opadd<Type>::operator=(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:433: error: within this context
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In function ‘void std::_Construct(_T1*, const _T2&) [with _T1 = cilk::reducer_opadd<int>, _T2 = cilk::reducer_opadd<int>]’:
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:87: instantiated from ‘_ForwardIterator std::__uninitialized_copy_aux(_InputIterator, _InputIterator, _ForwardIterator, std::__false_type) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:114: instantiated from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:254: instantiated from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, std::allocator<_Tp>) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*, _Tp = cilk::reducer_opadd<int>]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:275: instantiated from ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’
        geneAttack.cilk:667: instantiated from here
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private
        /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_construct.h:81: error: within this context
        make: *** [geneAttack] Error 1
        jinchen@galactica:~/workspace/biometrics/genAttack$ make
        cilk++ geneAttack.cilk -O1 -g -lcilkutil -o geneAttack
        geneAttack.cilk: In function ‘int cilk cilk_main(int, char**)’:
        geneAttack.cilk:670: error: expected primary-expression before ‘,’ token
        geneAttack.cilk:670: error: expected primary-expression before ‘}’ token
        geneAttack.cilk:674: error: ‘bitmiss_vec’ was not declared in this scope
        make: *** [geneAttack] Error 1

    The Cilk++ manual says it supports arrays/vectors of reducers, although there are performance issues to consider: "If you create a large number of reducers (for example, an array or vector of reducers) you must be aware that there is an overhead at steal and reduce that is proportional to the number of reducers in the program." Does anyone know what is going on? How should I declare/use a vector of reducers? Thank you.
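
    The error text itself states the cause: cilk::reducer_opadd declares its copy constructor and assignment operator private, and std::vector requires copyable elements, so a vector of reducers by value cannot compile. A common sketch is to hold the reducers by pointer instead (the header path is taken from the compiler output above; get_value() is the accessor reducer_opadd documents for reading the final sum):

        #include <vector>
        #include <cilk++/reducer_opadd.h>

        typedef cilk::reducer_opadd<int> T_reducer;

        int main() {
            // Reducers are non-copyable; store pointers and allocate each once.
            std::vector<T_reducer*> bitmiss_vec;
            for (int i = 0; i < 24; ++i) {
                bitmiss_vec.push_back(new T_reducer());
            }

            // ... parallel loops may now update *bitmiss_vec[k] += value; ...

            // Read the results and release the reducers after the parallel work.
            for (size_t k = 0; k < bitmiss_vec.size(); ++k) {
                int total = bitmiss_vec[k]->get_value();
                (void) total;  // use the per-bit total here
                delete bitmiss_vec[k];
            }
            return 0;
        }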

    Read the article

  • Wired component null in Seam EntityHome action

    - by rangalo
    I have a custom EntityHome class. I wire the dependent entity in the wire() method, but when I call the action (persist) the wired component is always null. What could be the reason? Similar code generated by seam-gen is apparently working. Here is the entity class; I have overridden the persist() method to log the value of the wired element. @Name("roundHome") @Scope(ScopeType.CONVERSATION) public class RoundHome extends EntityHome<Round>{ @In(required = false) private Golfer currentGolfer; @In(create = true) private TeeSetHome teeSetHome; @Override public String persist() { logger.info("Persist called"); if (null != getInstance().getTeeSet() ) { logger.info("teeSet not null in persist"); } else { logger.info("teeSet null in persist"); // wire(); } String retVal = super.persist(); //To change body of overridden methods use File | Settings | File Templates. return retVal; } @Logger private Log logger; public void wire() { logger.info("wire called"); TeeSet teeSet = teeSetHome.getDefinedInstance(); if (null != teeSet) { getInstance().setTeeSet(teeSet); logger.info("Successfully wired the teeSet instance with color: " + teeSet.getColor()); } } public boolean isWired() { logger.info("is wired called"); if(null == getInstance().getTeeSet()) { logger.info("wired teeSet instance is null, the button will be disabled !"); return false; } else { logger.info("wired teeSet instance is NOT null, the button will be enabled !"); logger.info("teeSet color: "+getInstance().getTeeSet().getColor()); return true; } } @RequestParameter public void setRoundId(Long id) { super.setId(id); } @Override protected Round createInstance() { Round round = super.createInstance(); round.setGolfer(currentGolfer); round.setDate(new java.sql.Date(System.currentTimeMillis())); return round; } } Here is the xhtml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <ui:composition xmlns="http://www.w3.org/1999/xhtml" xmlns:s="http://jboss.com/products/seam/taglib" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html" xmlns:a="http://richfaces.org/a4j" xmlns:rich="http://richfaces.org/rich" template="layout/template.xhtml"> <ui:define name="body"> <h:form id="roundform"> <rich:panel> <f:facet name="header"> #{roundHome.managed ?
'Edit' : 'Add' } Round </f:facet> <s:decorate id="dateField" template="layout/edit.xhtml"> <ui:define name="label">Date:</ui:define> <rich:calendar id="date" datePattern="dd/MM/yyyy" value="#{round.date}"/> </s:decorate> <s:decorate id="notesField" template="layout/edit.xhtml"> <ui:define name="label">Notes:</ui:define> <h:inputTextarea id="notes" cols="80" rows="3" value="#{round.notes}" /> </s:decorate> <s:decorate id="totalScoreField" template="layout/edit.xhtml"> <ui:define name="label">Total Score:</ui:define> <h:inputText id="totalScore" value="#{round.totalScore}" /> </s:decorate> <s:decorate id="weatherField" template="layout/edit.xhtml"> <ui:define name="label">Weather:</ui:define> <h:selectOneMenu id="weather" value="#{round.weather}"> <s:selectItems var="_weather" value="#{weatherCategories}" label="#{_weather.label}" noSelectionLabel=" Select " /> <s:convertEnum/> </h:selectOneMenu> </s:decorate> <div style="clear: both;"> <span class="required">*</span> required fields </div> </rich:panel> <div class="actionButtons"> <h:commandButton id="save" value="Save" action="#{roundHome.persist}" rendered="#{!roundHome.managed}" /> <!-- disabled="#{!roundHome.wired}" /> --> <h:commandButton id="update" value="Update" action="#{roundHome.update}" rendered="#{roundHome.managed}" /> <h:commandButton id="delete" value="Delete" action="#{roundHome.remove}" rendered="#{roundHome.managed}" /> <s:button id="discard" value="Discard changes" propagation="end" view="/Round.xhtml" rendered="#{roundHome.managed}" /> <s:button id="cancel" value="Cancel" propagation="end" view="/#{empty roundFrom ? 'RoundList' : roundFrom}.xhtml" rendered="#{!roundHome.managed}" /> </div> <rich:tabPanel> <rich:tab label="Tee Set"> <div class="association"> <h:outputText value="Tee set not selected" rendered="#{round.teeSet == null}" /> <rich:dataTable var="_teeSet" value="#{round.teeSet}" rendered="#{round.teeSet != null}"> <h:column> <f:facet name="header">Course</f:facet>#{_teeSet.course.name} </h:column> <h:column> <f:facet name="header">Color</f:facet>#{_teeSet.color} </h:column> <h:column> <f:facet name="header">Position</f:facet>#{_teeSet.pos} </h:column> </rich:dataTable> </div> </rich:tab> </rich:tabPanel> </h:form> </ui:define> </ui:composition>
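
One thing worth checking, offered as a hedged suggestion since the page configuration is not shown: seam-gen'ed CRUD pages invoke wire() through a page action on every request, so the association is re-established before any action method fires. If nothing calls wire() before persist() runs, getInstance().getTeeSet() will still be null. A minimal pages.xml sketch; the view-id /RoundEdit.xhtml is an assumption, adjust it to your actual page:

<!-- pages.xml sketch: the view-id is an assumption, use your edit page's id -->
<page view-id="/RoundEdit.xhtml">
    <action execute="#{roundHome.wire}"/>
</page>

Alternatively, un-commenting the wire() call at the top of the overridden persist() (before delegating to super.persist()) achieves the same effect for that one action.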

    Read the article

  • How to declare a vector or array of reducer objects in Cilk++?

    - by Jin
    Hi All, I had a problem when I am using Cilk++, an extension to C++ for parallel computing. I found that I can't declare a vector of reducer objects: typedef cilk::reducer_opadd<int> T_reducer; vector<T_reducer> bitmiss_vec; for (int i = 0; i < 24; ++i) { T_reducer r; bitmiss_vec.push_back(r); } However, when I compile the code with Cilk++, it complains at the push_back() line: cilk++ geneAttack.cilk -O1 -g -lcilkutil -o geneAttack /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, const _Tp&) [with _Tp = cilk::reducer_opadd<int>]’: /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:601: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’ geneAttack.cilk:667: instantiated from here /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/ext/new_allocator.h:107: error: within this context /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In member function ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’: /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’ geneAttack.cilk:667: instantiated from here /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:252: error: within this context /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’ geneAttack.cilk:667: instantiated from here /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:230: error: ‘cilk::reducer_opadd<Type>& cilk::reducer_opadd<Type>::operator=(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:256: error: within this context /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In static member function ‘static _BI2 std::__copy_backward<_BoolType, std::random_access_iterator_tag>::__copy_b(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*, bool _BoolType = false]’: 
/usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:465: instantiated from ‘_BI2 std::__copy_backward_aux(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:474: instantiated from ‘static _BI2 std::__copy_backward_normal<<anonymous>, <anonymous> >::__copy_b_n(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*, bool <anonymous> = false, bool <anonymous> = false]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:540: instantiated from ‘_BI2 std::copy_backward(_BI1, _BI1, _BI2) [with _BI1 = cilk::reducer_opadd<int>*, _BI2 = cilk::reducer_opadd<int>*]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:253: instantiated from ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’ geneAttack.cilk:667: instantiated from here /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:230: error: ‘cilk::reducer_opadd<Type>& cilk::reducer_opadd<Type>::operator=(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_algobase.h:433: error: within this context /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h: In function ‘void std::_Construct(_T1*, const _T2&) [with _T1 = cilk::reducer_opadd<int>, _T2 = cilk::reducer_opadd<int>]’: /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:87: instantiated from ‘_ForwardIterator std::__uninitialized_copy_aux(_InputIterator, _InputIterator, _ForwardIterator, std::__false_type) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:114: instantiated from ‘_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_uninitialized.h:254: instantiated from ‘_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, std::allocator<_Tp>) [with _InputIterator = cilk::reducer_opadd<int>*, _ForwardIterator = cilk::reducer_opadd<int>*, _Tp = cilk::reducer_opadd<int>]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/vector.tcc:275: instantiated from ‘void std::vector<_Tp, _Alloc>::_M_insert_aux(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, 
_Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’ /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_vector.h:605: instantiated from ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = cilk::reducer_opadd<int>, _Alloc = std::allocator<cilk::reducer_opadd<int> >]’ geneAttack.cilk:667: instantiated from here /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/cilk++/reducer_opadd.h:229: error: ‘cilk::reducer_opadd<Type>::reducer_opadd(const cilk::reducer_opadd<Type>&) [with Type = int]’ is private /usr/local/cilk/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/stl_construct.h:81: error: within this context make: *** [geneAttack] Error 1 jinchen@galactica:~/workspace/biometrics/genAttack$ make cilk++ geneAttack.cilk -O1 -g -lcilkutil -o geneAttack geneAttack.cilk: In function ‘int cilk cilk_main(int, char**)’: geneAttack.cilk:670: error: expected primary-expression before ‘,’ token geneAttack.cilk:670: error: expected primary-expression before ‘}’ token geneAttack.cilk:674: error: ‘bitmiss_vec’ was not declared in this scope make: *** [geneAttack] Error 1 The Cilk++ manual says it supports arrays/vectors of reducers, although there are performance issues to consider: "If you create a large number of reducers (for example, an array or vector of reducers) you must be aware that there is an overhead at steal and reduce that is proportional to the number of reducers in the program." Does anyone know what is going on? How should I declare and use a vector of reducers? Thank you
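
For what it's worth, the first error block is the compiler saying that cilk::reducer_opadd declares its copy constructor and copy assignment private (reducer_opadd.h:229-230), and std::vector copies its elements on push_back and on reallocation, so reducers cannot be stored in a vector by value. A minimal sketch of the usual workaround, holding the noncopyable reducers by pointer; the include name is inferred from the paths in the error output, so treat the header details as assumptions about your Cilk++ install:

#include <vector>
#include <reducer_opadd.h>   // header name inferred from the error paths above

typedef cilk::reducer_opadd<int> T_reducer;

int cilk_main(int argc, char* argv[])
{
    // Hold reducers by pointer: the vector then copies pointers,
    // never the noncopyable reducer objects themselves.
    std::vector<T_reducer*> bitmiss_vec;
    for (int i = 0; i < 24; ++i)
        bitmiss_vec.push_back(new T_reducer());

    // ... parallel loops would update (*bitmiss_vec[i]) += value; ...

    for (size_t i = 0; i < bitmiss_vec.size(); ++i)
        delete bitmiss_vec[i];
    return 0;
}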

    Read the article

  • During Spring unit test, data written to db but test not seeing the data

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple select count(*) against the user table to determine whether the user was persisted to the database or not. The test always fails, though. I was suspicious because in the Spring controller I wrote, saving an instance of User to the db succeeds. So I added the @Rollback(false) annotation to the unit test and, sure enough, the data is written to the database, since I can even see it in the appropriate table -- the transaction isn't rolled back when the test case is finished. Here's my test case: @ContextConfiguration(locations = { "classpath:context-daos.xml", "classpath:context-dataSource.xml", "classpath:context-hibernate.xml"}) public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private UserDao userDao; @Test @Rollback(false) public void testCreateUser() { try { UserModel user = randomUser(); String username = user.getUserName(); long id = userDao.create(user); String query = "select count(*) from public.usr where usr_name = '%s'"; long count = simpleJdbcTemplate.queryForLong(String.format(query, username)); Assert.assertEquals("User with username should be in the db", 1, count); } catch (Exception e) { e.printStackTrace(); Assert.fail("testCreateUser: " + e.getMessage()); } } } I think I was remiss by not adding the configuration files. context-hibernate.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean"> <property name="staticField"> <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value> </property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" destroy-method="destroy" scope="singleton"> <property name="namingStrategy"> <ref bean="namingStrategy"/> </property> <property name="dataSource" ref="dataSource"/> <property name="mappingResources"> <list> <value>com/company/model/usr.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop> <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop> <prop key="hibernate.cache.use_query_cache">true</prop> <prop key="hibernate.cache.use_minimal_puts">false</prop> <prop key="hibernate.cache.use_second_level_cache">true</prop> <prop key="hibernate.current_session_context_class">thread</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory"/> <property name="nestedTransactionAllowed" value="false" /> </bean> <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor"> <property name="transactionManager"> <ref local="transactionManager"/> </property> <property
name="transactionAttributes"> <props> <prop key="create">PROPAGATION_REQUIRED</prop> <prop key="delete">PROPAGATION_REQUIRED</prop> <prop key="update">PROPAGATION_REQUIRED</prop> <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop> </props> </property> </bean> </beans> context-dataSource.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="org.postgresql.Driver" /> <property name="jdbcUrl" value="jdbc\:postgresql\://localhost:5432/company_dev" /> <property name="user" value="postgres" /> <property name="password" value="postgres" /> </bean> </beans> context-daos.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/> <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/> <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl" abstract="true" depends-on="sessionFactory"> <property name="sessionFactory"> <ref bean="sessionFactory"/> </property> <property name="namingStrategy"> <ref bean="extendedFinderNamingStrategy"/> </property> </bean> <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true"> <property name="interceptorNames"> <list> <value>transactionInterceptor</value> <value>finderIntroductionAdvisor</value> </list> </property> </bean> <bean id="userDao" parent="abstractDao"> <property name="proxyInterfaces"> <value>com.company.dao.UserDao</value> </property> <property name="target"> <bean parent="abstractDaoTarget"> <constructor-arg> <value>com.company.model.UserModel</value> </constructor-arg> </bean> </property> </bean> </beans> Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here because I'm not sure it's needed but this is what I'm working with. Any help much appreciated.

    Read the article

  • How do I get my dependencies injected using @Configurable in conjunction with readResolve()?

    - by bmatthews68
    The framework I am developing for my application relies very heavily on dynamically generated domain objects. I recently started using Spring WebFlow and now need to be able to serialize my domain objects that will be kept in flow scope. I have done a bit of research and figured out that I can use writeReplace() and readResolve(). The only catch is that I need to look-up a factory in the Spring context. I tried to use @Configurable(preConstruction = true) in conjunction with the BeanFactoryAware marker interface. But beanFactory is always null when I try to use it in my createEntity() method. Neither the default constructor nor the setBeanFactory() injector are called. Has anybody tried this or something similar? I have included relevant class below. Thanks in advance, Brian /* * Copyright 2008 Brian Thomas Matthews Limited. * All rights reserved, worldwide. * * This software and all information contained herein is the property of * Brian Thomas Matthews Limited. Any dissemination, disclosure, use, or * reproduction of this material for any reason inconsistent with the * express purpose for which it has been disclosed is strictly forbidden. */ package com.btmatthews.dmf.domain.impl.cglib; import java.io.InvalidObjectException; import java.io.ObjectStreamException; import java.io.Serializable; import java.lang.reflect.InvocationTargetException; import java.util.HashMap; import java.util.Map; import org.apache.commons.beanutils.PropertyUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.BeanFactory; import org.springframework.beans.factory.BeanFactoryAware; import org.springframework.beans.factory.annotation.Configurable; import org.springframework.util.StringUtils; import com.btmatthews.dmf.domain.IEntity; import com.btmatthews.dmf.domain.IEntityFactory; import com.btmatthews.dmf.domain.IEntityID; import com.btmatthews.dmf.spring.IEntityDefinitionBean; /** * This class represents the serialized form of a domain object implemented * using CGLib. The readResolve() method recreates the actual domain object * after it has been deserialized into Serializable. You must define * &lt;spring-configured/&gt; in the application context. * * @param <S> * The interface that defines the properties of the base domain * object. * @param <T> * The interface that defines the properties of the derived domain * object. * @author <a href="mailto:[email protected]">Brian Matthews</a> * @version 1.0 */ @Configurable(preConstruction = true) public final class SerializedCGLibEntity<S extends IEntity<S>, T extends S> implements Serializable, BeanFactoryAware { /** * Used for logging. */ private static final Logger LOG = LoggerFactory .getLogger(SerializedCGLibEntity.class); /** * The serialization version number. */ private static final long serialVersionUID = 3830830321957878319L; /** * The application context. Note this is not serialized. */ private transient BeanFactory beanFactory; /** * The domain object name. */ private String entityName; /** * The domain object identifier. */ private IEntityID<S> entityId; /** * The domain object version number. */ private long entityVersion; /** * The attributes of the domain object. */ private HashMap<?, ?> entityAttributes; /** * The default constructor. */ public SerializedCGLibEntity() { SerializedCGLibEntity.LOG .debug("Initializing with default constructor"); } /** * Initialise with the attributes to be serialised. * * @param name * The entity name. * @param id * The domain object identifier. 
* @param version * The entity version. * @param attributes * The entity attributes. */ public SerializedCGLibEntity(final String name, final IEntityID<S> id, final long version, final HashMap<?, ?> attributes) { SerializedCGLibEntity.LOG .debug("Initializing with parameterized constructor"); this.entityName = name; this.entityId = id; this.entityVersion = version; this.entityAttributes = attributes; } /** * Inject the bean factory. * * @param factory * The bean factory. */ public void setBeanFactory(final BeanFactory factory) { SerializedCGLibEntity.LOG.debug("Injected bean factory"); this.beanFactory = factory; } /** * Called after deserialisation. The corresponding entity factory is * retrieved from the bean application context and BeanUtils methods are * used to initialise the object. * * @return The initialised domain object. * @throws ObjectStreamException * If there was a problem creating or initialising the domain * object. */ public Object readResolve() throws ObjectStreamException { SerializedCGLibEntity.LOG.debug("Transforming deserialized object"); final T entity = this.createEntity(); entity.setId(this.entityId); try { PropertyUtils.setSimpleProperty(entity, "version", this.entityVersion); for (Map.Entry<?, ?> entry : this.entityAttributes.entrySet()) { PropertyUtils.setSimpleProperty(entity, entry.getKey() .toString(), entry.getValue()); } } catch (IllegalAccessException e) { throw new InvalidObjectException(e.getMessage()); } catch (InvocationTargetException e) { throw new InvalidObjectException(e.getMessage()); } catch (NoSuchMethodException e) { throw new InvalidObjectException(e.getMessage()); } return entity; } /** * Lookup the entity factory in the application context and create an * instance of the entity. The entity factory is located by getting the * entity definition bean and using the factory registered with it or * getting the entity factory. The name used for the definition bean lookup * is ${entityName}Definition while ${entityName} is used for the factory * lookup. * * @return The domain object instance. * @throws ObjectStreamException * If the entity definition bean or entity factory were not * available. */ @SuppressWarnings("unchecked") private T createEntity() throws ObjectStreamException { SerializedCGLibEntity.LOG.debug("Getting domain object factory"); // Try to use the entity definition bean final IEntityDefinitionBean<S, T> entityDefinition = (IEntityDefinitionBean<S, T>)this.beanFactory .getBean(StringUtils.uncapitalize(this.entityName) + "Definition", IEntityDefinitionBean.class); if (entityDefinition != null) { final IEntityFactory<S, T> entityFactory = entityDefinition .getFactory(); if (entityFactory != null) { SerializedCGLibEntity.LOG .debug("Domain object factory obtained via enity definition bean"); return entityFactory.create(); } } // Try to use the entity factory final IEntityFactory<S, T> entityFactory = (IEntityFactory<S, T>)this.beanFactory .getBean(StringUtils.uncapitalize(this.entityName) + "Factory", IEntityFactory.class); if (entityFactory != null) { SerializedCGLibEntity.LOG .debug("Domain object factory obtained via direct look-up"); return entityFactory.create(); } // Neither worked! SerializedCGLibEntity.LOG.warn("Cannot find domain object factory"); throw new InvalidObjectException( "No entity definition or factory found for " + this.entityName); } }
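
@Configurable only takes effect when the AnnotationBeanConfigurerAspect is actually woven into the class, either at compile time with the AspectJ compiler or at load time with the Spring instrumentation agent; without weaving, neither the construction-time injection nor setBeanFactory() will ever run, which matches the null observed here. A minimal configuration sketch; the jar path in the comment is an assumption for your environment:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <!-- Registers AnnotationBeanConfigurerAspect so @Configurable classes
         get their dependencies injected. -->
    <context:spring-configured/>

    <!-- Enables load-time weaving; requires starting the JVM with
         -javaagent:/path/to/spring-instrument.jar (path is an assumption). -->
    <context:load-time-weaver/>
</beans>

If load-time weaving is not an option in the target container, compile-time weaving of the spring-aspects library with the AspectJ compiler is the usual alternative.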

    Read the article

  • Spring's EntityManager not persisting

    - by Fernando Camargo
    Well, my project was using EJB and JPA (with Hibernate), but I had to switch to Spring. Everything was working well before that: the EJB used to inject the EntityManager, control the transaction, etc. When I switched to Spring I had a lot of problems, because I'm new to Spring. But now that everything is running, I have this problem: the data is never saved to the database. I configured Spring to control the transactions. I have Spring beans used in JSF, which call Spring services that do the hard work. These services have an EntityManager injected and use @Transactional with REQUIRED propagation. The services pass the EntityManager to a DAO that calls entityManager.persist(bean). The selects appear to work well, and the JTA transaction appears to work well too (I saw it in the log), but the entity is not saved! Here is the log: INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter: doFilterInternal() (linha 136): Opening JPA EntityManager in OpenEntityManagerInViewFilter INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.beans.factory.support.DefaultListableBeanFactory: doGetBean() (linha 245): Returning cached instance of singleton bean 'transactionManager' INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: getTransaction() (linha 365): Creating new transaction with name [br.org.cni.pronatec.controller.service.MontanteServiceImpl.adicionarValor]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; '' INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 493): Opened new Session [org.hibernate.impl.SessionImpl@2b2fe2f0] for Hibernate transaction INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 504): Preparing JDBC Connection of Hibernate Session [org.hibernate.impl.SessionImpl@2b2fe2f0] INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 569): Exposing Hibernate transaction as JDBC transaction [com.sun.gjc.spi.jdbc40.ConnectionHolder40@3bcd4840] INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerInvocationHandler: doJoinTransaction() (linha 383): Joined JTA transaction INFO: Hibernate: select hibernate_sequence.nextval from dual INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: processCommit() (linha 752): Initiating transaction commit INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doCommit() (linha 652): Committing Hibernate transaction on Session [org.hibernate.impl.SessionImpl@2b2fe2f0] INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doCleanupAfterCompletion() (linha 734): Closing Hibernate Session [org.hibernate.impl.SessionImpl@2b2fe2f0] after transaction INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.SessionFactoryUtils: closeSession() (linha 800): Closing Hibernate Session INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter: doFilterInternal() (linha 154): Closing JPA EntityManager in OpenEntityManagerInViewFilter INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG]
org.springframework.orm.jpa.EntityManagerFactoryUtils: closeEntityManager() (linha 343): Closing JPA EntityManager In the log I see it committing the transaction, but I don't see the insert query (Hibernate is set to print every query). I also see Hibernate look up the next value of the sequence ID, but after that it never actually inserts. Here is the spring context configuration: <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="PronatecPU" /> <property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml" /> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" > <property name="transactionManagerName" value="java:/TransactionManager" /> <property name="userTransactionName" value="UserTransaction" /> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" /> <tx:annotation-driven transaction-manager="transactionManager" /> Here is my persistence.xml: <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="PronatecPU" transaction-type="JTA"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <jta-data-source>jdbc/pronatec</jta-data-source> <class>br.org.cni.pronatec.model.bean.AgendamentoBuscaSistec</class> <class>br.org.cni.pronatec.model.bean.AgendamentoExportacaoZeus</class> <class>br.org.cni.pronatec.model.bean.AgendamentoImportacaoZeus</class> <class>br.org.cni.pronatec.model.bean.Aluno</class> <class>br.org.cni.pronatec.model.bean.Curso</class> <class>br.org.cni.pronatec.model.bean.DepartamentoRegional</class> <class>br.org.cni.pronatec.model.bean.Dof</class> <class>br.org.cni.pronatec.model.bean.Escola</class> <class>br.org.cni.pronatec.model.bean.Inconsistencia</class> <class>br.org.cni.pronatec.model.bean.Matricula</class> <class>br.org.cni.pronatec.model.bean.Montante</class> <class>br.org.cni.pronatec.model.bean.ParametrosVingentes</class> <class>br.org.cni.pronatec.model.bean.TipoCurso</class> <class>br.org.cni.pronatec.model.bean.Turma</class> <class>br.org.cni.pronatec.model.bean.UnidadeFederativa</class> <class>br.org.cni.pronatec.model.bean.ValorAssistenciaEstudantil</class> <class>br.org.cni.pronatec.model.bean.ValorHora</class> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="current_session_context_class" value="thread"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.format_sql" value="true"/> <property name="hibernate.dialect" value="org.hibernate.dialect.OracleDialect"/> <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.SunONETransactionManagerLookup"/> <property name="hibernate.hbm2ddl.auto" value="update"/> </properties> </persistence-unit> </persistence> Here is my service that is injected in the managed bean: @Service
@Scope("prototype") @Transactional(propagation= Propagation.REQUIRED) public class MontanteServiceImpl { // more code @PersistenceContext(unitName="PronatecPU", type= PersistenceContextType.EXTENDED) private EntityManager entityManager; // more code // The method that is called by another public method that do something before private void salvarMontante(Montante montante) { montante.setDataTransacao(new Date()); MontanteDao montanteDao = new MontanteDao(entityManager); montanteDao.salvar(montante); } // more code } My MontanteDao inherits from a base DAO, like this: public class MontanteDao extends BaseDao<Montante> { public MontanteDao(EntityManager entityManager) { super(entityManager); } } And the method that is called in BaseDao is this: public void salvar(T bean) { entityManager.persist(bean); } Like you can see, it just pick the injected entityManager and call the persist() method. The transaction is being controlled by the Spring, like is printed in the log, but the insert query is never printed in log and it is never saved. I'm sorry about my bad english. Thanks in advance for who helps.

    Read the article

  • Problems using Hibernate and Spring in web application

    - by user628480
    Hi. I'm getting a NullPointerException when trying to call getCurrentSession(): java.lang.NullPointerException servlets.ControlServlet.doPost(ControlServlet.java:46) javax.servlet.http.HttpServlet.service(HttpServlet.java:709) javax.servlet.http.HttpServlet.service(HttpServlet.java:802) I use Tomcat 5.5 index.jsp page: <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title></title> </head> <body> <%@ page import="java.util.List" %> <%@ page import="data.Singer" %> <jsp:useBean id="singer" class="data.Singer" scope="session"/> <jsp:setProperty name="singer" property="*" /> <form action="ControlServlet" method="POST"> <form method="POST"> Name:<br /> <input type="text" name="name" /><br /> Type:<br /> <input type="text" name="type" /><br /> <input type="submit" name="Add song" value="Add song"> <input type="submit" name="save" value="Save" /><br><br> <input type ="submit" name="values" value="Get values" > </form> </body> </html> web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <display-name>webproject</display-name> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/beans.xml, /WEB-INF/conf.xml, /WEB-INF/singers.hbm.xml, /WEB-INF/songs.hbm.xml, /WEB-INF/singerbeans.xml, /WEB-INF/songbeans.xml</param-value> </context-param> <servlet> <servlet-name>context</servlet-name> <servlet-class>org.springframework.web.context.ContextLoaderServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>test</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>test</servlet-name> <url-pattern>*.*</url-pattern> </servlet-mapping> <servlet> <servlet-name>action</servlet-name> <servlet-class>org.apache.struts.action.ActionServlet</servlet-class> <init-param> <param-name>config</param-name> <param-value>/WEB-INF/beans.xml, /WEB-INF/conf.xml, /WEB-INF/singers.hbm.xml, /WEB-INF/songs.hbm.xml, /WEB-INF/singerbeans.xml, /WEB-INF/songbeans.xml</param-value> </init-param> <init-param> <param-name>debug</param-name> <param-value>2</param-value> </init-param> <init-param> <param-name>detail</param-name> <param-value>2</param-value> </init-param> <load-on-startup>2</load-on-startup> </servlet> <servlet> <description> </description> <display-name>ControlServlet</display-name> <servlet-name>ControlServlet</servlet-name> <servlet-class>servlets.ControlServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>action</servlet-name> <url-pattern>*.*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>ControlServlet</servlet-name> <url-pattern>/ControlServlet</url-pattern> </servlet-mapping> </web-app> ControlServlet.java public class ControlServlet extends HttpServlet { private static final long serialVersionUID = 1L; @Autowired private SingerDao singerdao; public SingerDao getSingerDao() { return singerdao; } public void setSingerDao(SingerDao singerdao) { this.singerdao = singerdao; } public ControlServlet() { super(); // TODO Auto-generated constructor stub }
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { // TODO Auto-generated method stub } protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { if (request.getParameter("values") != null) { response.getWriter().println(singerdao.getDBValues()); } } } and SingerDao.java public class SingerDao implements SingerDaoInterface { @Autowired private SessionFactory sessionFactory; public void setSessionFactory(SessionFactory sessionFactory) { this.sessionFactory = sessionFactory; } public List getDBValues() { Session session = getCurrentSession(); List<Singer> singers = session.createCriteria(Singer.class).list(); return singers; } private org.hibernate.classic.Session getCurrentSession() { return sessionFactory.getCurrentSession(); } public void updateSinger(Singer singer) { Session session = getCurrentSession(); session.update(singer); } public Singer getSinger(int id) { Singer singer = null; Session session = getCurrentSession(); singer = (Singer) session.load(Singer.class, id); return singer; } public void deleteSinger(Singer singer) { Session session = getCurrentSession(); session.delete(singer); } public void insertRow(Singer singer) { Session session = getCurrentSession(); session.save(singer); } } In a simple Java project it works fine. I think the sessionFactory isn't getting autowired, but why? Thanks all.
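
The NullPointerException at ControlServlet.java:46 is consistent with singerdao never being injected: Tomcat instantiates ControlServlet itself, so Spring never processes its @Autowired fields (and the same applies to SingerDao's sessionFactory if the DAO is created with new rather than obtained from the context). A minimal sketch of looking the wired bean up in the root web application context from init(); the bean name "singerDao" is an assumption about your bean definitions:

import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

@Override
public void init() throws ServletException {
    super.init();
    // The servlet container, not Spring, creates this servlet, so fetch
    // the dependency from the root WebApplicationContext instead of
    // relying on @Autowired.
    WebApplicationContext ctx = WebApplicationContextUtils
            .getRequiredWebApplicationContext(getServletContext());
    singerdao = (SingerDao) ctx.getBean("singerDao");
}

This requires the root context (loaded by the context loader declared in web.xml) to actually define a singerDao bean with its sessionFactory set; annotations on classes Spring never instantiates have no effect.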

    Read the article

  • Reading input from a text file omits the first entry and adds a nonsense value to the end?

    - by Greenhouse Gases
    Hi there. When I input locations from a txt file I am getting a peculiar error where it seems to miss off the first entry, yet adds a garbage entry to the end of the linked list (it is designed to take the name, latitude and longitude for each location, you will notice). I imagine this to be an issue with where it starts collecting the inputs and where it stops, but I can't find the error!! It reads the first line correctly but then skips to the next before adding it: during testing for the bug it had no record of the first location, Lisbon, even though while stepping into the method call it was reading it. Very bizarre, but hopefully someone knows the issue. Here, firstly, is my header file: #include <string> struct locationNode { char nodeCityName [35]; double nodeLati; double nodeLongi; locationNode* Next; void CorrectCase() // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = this->nodeCityName[0], letVal; int n = 1; // variable for name index from second letter onwards if((this->nodeCityName[0] >90) && (this->nodeCityName[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter this->nodeCityName[0] = firstLetVal; } while(n <= MAX_SIZE - 1) { if((this->nodeCityName[n] >= 65) && (this->nodeCityName[n] <= 90)) { letVal = this->nodeCityName[n] + 32; this->nodeCityName[n] = letVal; } n++; } //cityNameInput = this->nodeCityName; } }; class Locations { private: int size; public: Locations(){ }; // constructor for the class locationNode* Head; //int Add(locationNode* Item); }; And here is the file containing main: // U08221.cpp : main project file. #include "stdafx.h" #include "Locations.h" #include <iostream> #include <string> #include <fstream> using namespace std; int n = 0,x, locationCount = 0, MAX_SIZE = 35; string cityNameInput; char targetCity[35]; bool acceptedInput = false, userInputReq = true, match = false, nodeExists = false;// note: addLocation(), set to true to enable user input as opposed to txt file locationNode *start_ptr = NULL; // pointer to first entry in the list locationNode *temp, *temp2; // Part is a pointer to a new locationNode we can assign changing value followed by a call to Add locationNode *seek, *bridge; void setElementsNull(char cityParam[]) { int y=0, count =0; while(cityParam[y] != NULL) { y++; } while(y < MAX_SIZE) { cityParam[y] = NULL; y++; } } void addLocation() { temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it if(!userInputReq) // bool that determines whether user input is required in adding the node to the list { cout << endl << "Enter the name of the location: "; cin >> temp->nodeCityName; temp->CorrectCase(); setElementsNull(temp->nodeCityName); cout << endl << "Please enter the latitude value for this location: "; cin >> temp->nodeLati; cout << endl << "Please enter the longitude value for this location: "; cin >> temp->nodeLongi; cout << endl; } temp->Next = NULL; //set to NULL as when one is added it is currently the last in the list and so can not point to the next if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list if(!userInputReq){ cout << "Location sucessfully added to the database!
There are " << locationCount << " location(s) stored" << endl; } } void populateList(){ ifstream inputFile; inputFile.open ("locations.txt", ios::in); userInputReq = true; temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it do { inputFile.get(temp->nodeCityName, 35, ' '); setElementsNull(temp->nodeCityName); inputFile >> temp->nodeLati; inputFile >> temp->nodeLongi; setElementsNull(temp->nodeCityName); if(temp->nodeCityName[0] == 10) //remove linefeed from input { for(int i = 0; temp->nodeCityName[i] != NULL; i++) { temp->nodeCityName[i] = temp->nodeCityName[i + 1]; } } addLocation(); } while(!inputFile.eof()); userInputReq = false; cout << "Successful!" << endl << "List contains: " << locationCount << " entries" << endl; cout << endl; inputFile.close(); } bool nodeExistTest(char targetCity[]) // see if entry is present in the database { match = false; seek = start_ptr; int letters = 0, letters2 = 0, x = 0, y = 0; while(targetCity[y] != NULL) { letters2++; y++; } while(x <= locationCount) // locationCount is number of entries currently in list { y=0, letters = 0; while(seek->nodeCityName[y] != NULL) // count letters in the current name { letters++; y++; } if(letters == letters2) // same amount of letters in the name { y = 0; while(y <= letters) // compare each letter against one another { if(targetCity[y] == seek->nodeCityName[y]) { match = true; y++; } else { match = false; y = letters + 1; // no match, terminate comparison } } } if(match) { x = locationCount + 1; //found match so terminate loop } else{ if(seek->Next != NULL) { bridge = seek; seek = seek->Next; x++; } else { x = locationCount + 1; // end of list so terminate loop } } } return match; } void deleteRecord() // complete this { int junction = 0; locationNode *place; cout << "Enter the name of the city you wish to remove" << endl; cin >> targetCity; setElementsNull(targetCity); if(nodeExistTest(targetCity)) //if this node does exist { if(seek == start_ptr) // if it is the first in the list { junction = 1; } if(seek != start_ptr && seek->Next == NULL) // if it is last in the list { junction = 2; } switch(junction) // will alter list accordingly dependant on where the searched for link is { case 1: start_ptr = start_ptr->Next; delete seek; --locationCount; break; case 2: place = seek; seek = bridge; delete place; --locationCount; break; default: bridge->Next = seek->Next; delete seek; --locationCount; break; } } else { cout << targetCity << "That entry does not currently exist" << endl << endl << endl; } } void searchDatabase() { char choice; cout << "Enter search term..." << endl; cin >> targetCity; if(nodeExistTest(targetCity)) { cout << "Entry: " << endl << endl; } else { cout << "Sorry, that city is not currently present in the list." << endl << "Would you like to add this city now Y/N?" << endl; cin >> choice; /*while(choice != ('Y' || 'N')) { cout << "Please enter a valid choice..." 
<< endl; cin >> choice; }*/ switch(choice) { case 'Y': addLocation(); break; case 'N': break; default : cout << "Invalid choice" << endl; break; } } } void printDatabase() { temp = start_ptr; // set temp to the start of the list do { if (temp == NULL) { cout << "You have reached the end of the database" << endl; } else { // Display details for what temp points to at that stage cout << "Location : " << temp->nodeCityName << endl; cout << "Latitude : " << temp->nodeLati << endl; cout << "Longitude : " << temp->nodeLongi << endl; cout << endl; // Move on to next locationNode if one exists temp = temp->Next; } } while (temp != NULL); } void nameValidation(string name) { n = 0; // start from first letter x = name.size(); while(!acceptedInput) { if((name[n] >= 65) && (name[n] <= 122)) // is in the range of letters { while(n <= x - 1) { while((name[n] >=91) && (name[n] <=97)) // ERROR!! { cout << "Please enter a valid city name" << endl; cin >> name; } n++; } } else { cout << "Please enter a valid city name" << endl; cin >> name; } if(n <= x - 1) { acceptedInput = true; } } cityNameInput = name; } int main(array<System::String ^> ^args) { //main contains test calls to functions at present cout << "Populating list..."; populateList(); printDatabase(); deleteRecord(); printDatabase(); cin >> cityNameInput; } The text file contains this (ignore the names, they are just for testing!!): Lisbon 45 47 Fattah 45 47 Darius 42 49 Peter 45 27 Sarah 85 97 Michelle 45 47 John 25 67 Colin 35 87 Shiron 40 57 George 34 45 Sean 22 33 The output omits Lisbon, but adds on a garbage entry with nonsense values. Any ideas why? Thank you in advance.
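
The shifted-by-one behaviour is consistent with two bugs working together: populateList() reads the record into temp, but addLocation() immediately does temp = new locationNode;, so the node that actually gets linked is a fresh blank one (the data just read, Lisbon first, is discarded and only shows up when the next iteration writes into the already-linked node), and the do/while(!inputFile.eof()) loop runs once more after the last successful read, appending the leftover garbage node. A sketch of a corrected reader, assuming the locationNode struct and the start_ptr/locationCount globals from the question:

#include <cstring>
#include <fstream>
#include <string>

// Append a node that has already been filled in, instead of
// allocating a second, blank node inside the append function.
void appendNode(locationNode* node)
{
    node->Next = NULL;
    if (start_ptr == NULL) {
        start_ptr = node;
    } else {
        locationNode* walk = start_ptr;
        while (walk->Next != NULL) walk = walk->Next;
        walk->Next = node;
    }
    ++locationCount;
}

void populateList()
{
    std::ifstream inputFile("locations.txt");
    std::string name;
    double lat, lon;
    // The stream controls the loop: when the final read fails at EOF the
    // body never runs, so no garbage node is appended, and every record
    // that was read (including the first) ends up in the list.
    while (inputFile >> name >> lat >> lon) {
        locationNode* node = new locationNode;
        std::strncpy(node->nodeCityName, name.c_str(), 34);
        node->nodeCityName[34] = '\0';
        node->nodeLati = lat;
        node->nodeLongi = lon;
        appendNode(node);
    }
}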

    Read the article

  • Building applications with WPF, MVVM and Prism (aka CAG)

    - by skjagini
    In this article I am going to walk through an application built with WPF and Prism (aka Composite Application Guidance, CAG) which simulates engaging a taxi (cab). The rules are simple; the app has 3 screens: a login screen to authenticate the user, an information screen, and a screen to engage the cab, roam around and calculate the total fare. Metered Rate of Fare: the meter is required to be engaged when a cab is occupied by anyone; $3.00 upon entry; $0.35 for each additional unit. The unit fare is: one-fifth of a mile when the cab is traveling at 6 miles an hour or more, or 60 seconds when not in motion or traveling at less than 12 miles per hour. Night surcharge of $.50 after 8:00 PM & before 6:00 AM. Peak hour weekday surcharge of $1.00 Monday - Friday after 4:00 PM & before 8:00 PM. New York State tax surcharge of $.50 per ride. Example: Friday (2010-10-08) 5:30pm, start at Lexington Ave & E 57th St, end at Irving Pl & E 15th St: start = $3.00; travels 2 miles at less than 6 mph for 15 minutes = $3.50; travels at more than 12 mph for 5 minutes = $1.75; peak hour weekday surcharge = $1.00 (ride started at 5:30 pm); New York State tax surcharge = $0.50. Before we dive into the app, I would like to give a brief description of the frameworks. If you want to jump to the source code, scroll all the way to the end of the post. MVVM The MVVM pattern is in no way tied to the usage of Prism in your application and should be considered if you are using WPF, with or without Prism. Let's say you are not familiar with MVVM: your typical UI work involves adding some UI controls like text boxes and a button, double-clicking the button to generate an event handler, calling a method from the business layer and updating the user interface. That works most of the time for developing small-scale applications, but the problem with this approach is that business logic gets wrapped up in UI-specific code, which is hard to unit test and mock; MVVM solves exactly that problem. MVVM stands for Model(M) - View(V) - ViewModel(VM); based on the interactions between the three parties it should really be called VVMM (View - View Model - Model), but MVVM sounds more like MVC (Model-View-Controller), hence the name. WPF lets you create user interfaces using XAML, and MVVM takes it to the next level by allowing complete separation of user interface and business logic. In WPF each view has a property, DataContext, which, when set to an instance of a class (which happens to be your view model), provides the data the view is interested in; i.e., the view interacts with the view model, and at the same time the view model interacts with the view, through DataContext. Sujith, if the view and view model interact directly with each other, how does MVVM help me with separation of concerns? Well, the catch is that DataContext is of type Object; since it is of type object the view doesn't know the exact type of the view model, allowing views and view models to be loosely coupled. View models aggregate data from models (data access layer, services, etc.) and make it available to views through properties, methods, etc.; i.e., view models interact with models. PRISM Prism is provided by the Microsoft Patterns and Practices team; the source code can be downloaded from CodePlex, with samples and documentation on MSDN. The name composite implies composing the user interface from different modules (views) without direct dependencies on each other, again allowing loosely coupled development.
Well Sujith, I can already do that with user controls, so why should I learn another framework? That's correct, you can decouple using user controls, but you still have to manage some amount of coupling yourself: how do you communicate between the controls, how do you subscribe/unsubscribe, how do you load/unload views dynamically. Prism is not a replacement for user controls; it provides the following features which greatly help in designing composite applications: Dependency Injection (DI) / Inversion of Control (IoC), Modules, Regions, Event Aggregator, and Commands. Simply put, MVVM helps with building a single view, and Prism helps with building an application out of those views. There are other open source alternatives to Prism, like MVVMLight and Cinch; take a look at them as well. Let's dig into the source code. 1. Solution The solution is made of the following projects. Framework: holds the common functionality for building applications using WPF and Prism. TaxiClient: start-up project, bootstrapping and app styling. TaxiCommon: helps with the business logic. TaxiModules: holds the meat of the application with views and view models. TaxiTests: to test the application. 2. DI / IoC Dependency Injection (DI), as the name implies, refers to injecting dependencies, and Inversion of Control (IoC) means the calling code has no direct control over the dependencies, the opposite of the normal way of programming where dependencies are passed by the caller, i.e. inversion; aside from some differences in terminology the concept is the same in both cases. The idea behind the DI/IoC pattern is to reduce the amount of direct coupling between the different components of the application; the higher the dependency, the more tightly coupled the application, resulting in code which is hard to modify, unit test and mock. Initializing Dependency Injection through BootStrapper TaxiClient is the starting project of the solution, and App (App.xaml) is the starting class that gets called when you run the application. From the App's OnStartup method we will invoke the BootStrapper. namespace TaxiClient { /// <summary> /// Interaction logic for App.xaml /// </summary> public partial class App : Application { protected override void OnStartup(StartupEventArgs e) { base.OnStartup(e); (new BootStrapper()).Run(); } } } BootStrapper is your contact point for initializing the application, including dependency injection, creating the Shell and other frameworks. We are going to use Unity for DI; there are lots of open source DI frameworks, like Spring.Net and StructureMap, with different feature sets, and you can choose a framework based on your preferences. Note that Prism comes with built-in support for Unity; for example we are deriving from UnityBootstrapper in our case, and for any other DI framework you have to extend Prism appropriately. namespace TaxiClient { public class BootStrapper: UnityBootstrapper { protected override IModuleCatalog CreateModuleCatalog() { return new ConfigurationModuleCatalog(); } protected override DependencyObject CreateShell() { Framework.FrameworkBootStrapper.Run(Container, Application.Current.Dispatcher); Shell shell = new Shell(); shell.ResizeMode = ResizeMode.NoResize; shell.Show(); return shell; } } }
namespace Framework { public class FrameworkBootStrapper { public static void Run(IUnityContainer container, Dispatcher dispatcher) { UIDispatcher uiDispatcher = new UIDispatcher(dispatcher); container.RegisterInstance<IDispatcherService>(uiDispatcher); container.RegisterType<IInjectSingleViewService, InjectSingleViewService>( new ContainerControlledLifetimeManager()); . . . } } } In the above code we are registering two components with the Unity container, and you will observe that we are following two different approaches, RegisterInstance and RegisterType. With RegisterInstance we are registering an existing instance, and the same instance will be returned for every request made for IDispatcherService; with RegisterType we are requesting the Unity container to create an instance for us when required, i.e., when I request an instance of IInjectSingleViewService, Unity will create/return an instance of the InjectSingleViewService class. With RegisterType we can also configure the lifetime of the instance being created: with ContainerControlledLifetimeManager, the Unity container caches the instance and reuses it for any subsequent requests, without creating a new instance. Let's take a look at FareViewModel.cs and its constructor. The constructor takes one parameter, IEventAggregator, and if you try to find all references in your solution for IEventAggregator, you will not find a single location where an instance of EventAggregator is passed directly to the constructor. The code still gets an instance and works fine because Prism, when used with the Unity container, is already configured to return an instance of EventAggregator when IEventAggregator is requested; in this particular case it is called constructor injection. public class FareViewModel:ObservableBase, IDataErrorInfo { ... private IEventAggregator _eventAggregator; public FareViewModel(IEventAggregator eventAggregator) { _eventAggregator = eventAggregator; InitializePropertyNames(); InitializeModel(); PropertyChanged += OnPropertyChanged; } ... 3. Shell Shells are very similar in operation to Master Pages in asp.net or MDI in Windows Forms, and shells contain regions which display the views; you can have as many regions as you wish in a given view. You can also nest regions, i.e., one region can load a view which in itself may contain other regions. We have to create a shell at the start of the application, and we are doing it by overriding the CreateShell method in the BootStrapper. From the following Shell.xaml you will notice that we have two content controls with the region names 'MenuRegion' and 'MainRegion'. The idea here is that you can inject any user controls into the regions dynamically, i.e., a Menu user control for MenuRegion, and based on the user action you can load the appropriate view into MainRegion.
<Window x:Class="TaxiClient.Shell"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Regions="clr-namespace:Microsoft.Practices.Prism.Regions;assembly=Microsoft.Practices.Prism"
    Title="Taxi" Height="370" Width="800">
    <Grid Margin="2">
        <ContentControl Regions:RegionManager.RegionName="MenuRegion"
            HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
            HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />

        <ContentControl Grid.Row="1" Regions:RegionManager.RegionName="MainRegion"
            HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
            HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />
        <!--<Border Grid.ColumnSpan="2" BorderThickness="2" CornerRadius="3" BorderBrush="LightBlue" />-->

    </Grid>
</Window>

4. Modules

Prism provides the ability to build composite applications, and modules play an important role in it. For example, suppose you are building a Mortgage Loan Processor application with 3 components: the customer's credit history, existing mortgages, and new home/loan information; and consider that the customer's credit history component involves gathering data about his/her address, background information, job details etc. The idea with Prism modules is to separate the implementation of these 3 components into their own Visual Studio projects, allowing you to build the components independently, with no dependency on each other. If we need to add another component to the application, it can be developed by an in-house team or some other team in the organization, starting with a new Visual Studio project and added to the solution at run time with very little knowledge about the application.

Prism modules are defined by implementing the IModule interface, and each Visual Studio project to be considered as a module should implement the IModule interface. From BootStrapper.cs you will observe that we are overriding the CreateModuleCatalog method to return a ConfigurationModuleCatalog, which returns the modules that are registered for the application using the app.config file; you can also add modules in code. Let's take a look at the configuration file.

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="modules" type="Microsoft.Practices.Prism.Modularity.ModulesConfigurationSection, Microsoft.Practices.Prism"/>
  </configSections>
  <modules>
    <module assemblyFile="TaxiModules.dll" moduleType="TaxiModules.ModuleInitializer, TaxiModules" moduleName="TaxiModules"/>
  </modules>
</configuration>

Here we are adding the TaxiModules project to our solution, and TaxiModules.ModuleInitializer implements the IModule interface.

5. Module Mapper

With Prism modules you can dynamically add or remove modules from the regions; apart from that, Prism also provides an API to control adding and removing the views from a region within the same module.

Taxi Information Screen: Engage the Taxi Screen:

The sample application has two screens, 'Taxi Information' and 'Engage the Taxi', and they both reside in the same module, TaxiModules. 'Engage the Taxi' is in turn made of two user controls, FareView on the left and TotalView on the right. We have created a Shell with two regions, MenuRegion and MainRegion, with the menu loaded into MenuRegion. We could create a wrapper user control called EngageTheTaxi made of FareView and TotalView, and load either TaxiInfo or EngageTheTaxi into MainRegion based on the user action.
Though this would work, it tightly binds the user controls, and for every combination of user controls we would need to create a dummy wrapper control to contain them. Instead, we can apply the principles we learned so far from Shell/regions and introduce another template (LeftAndRightRegionView.xaml) made of two regions, Region1 (left) and Region2 (right), and load FareView and TotalView dynamically. To help with loading the views dynamically I have introduced a helper interface, IInjectSingleViewService, an idea suggested by Mike Taulty, a must-read blog for .NET developers.

using System;
using System.Collections.Generic;
using System.ComponentModel;

namespace Framework.PresentationUtility.Navigation
{
    public interface IInjectSingleViewService : INotifyPropertyChanged
    {
        IEnumerable<CommandViewDefinition> Commands { get; }
        IEnumerable<ModuleViewDefinition> Modules { get; }

        void RegisterViewForRegion(string commandName, string viewName, string regionName, Type viewType);
        void ClearViewFromRegion(string viewName, string regionName);
        void RegisterModule(string moduleName, IList<ModuleMapper> moduleMappers);
    }
}

The interface declares three methods to work with views:

RegisterViewForRegion: Registers a view with a particular region. You can register multiple views and their regions under one command. When that command is invoked, all the views registered under it will be loaded into their regions.
ClearViewFromRegion: Unloads a specific view from a region.
RegisterModule: The idea is that when a command is invoked you can load the UI with a set of controls in their default positions, and based on the user interaction you can load different controls into different regions on the fly. It is supported by ModuleViewDefinition and ModuleMapper, as shown below.

namespace Framework.PresentationUtility.Navigation
{
    public class ModuleViewDefinition
    {
        public string ModuleName { get; set; }
        public IList<ModuleMapper> ModuleMappers;
        public ICommand Command { get; set; }
    }

    public class ModuleMapper
    {
        public string ViewName { get; set; }
        public string RegionName { get; set; }
        public Type ViewType { get; set; }
    }
}

6. Event Aggregator

The Prism event aggregator enables messaging between components, as in the Observer pattern: the Notifier notifies the Observer, which receives the notifications it is interested in. In the Observer pattern, the Observer has to unsubscribe from notifications when it is no longer interested in them, which allows the Notifier to remove the Observer's reference from its local cache. Though .NET has managed garbage collection, it cannot collect instances that are still referenced by an active instance; this keeps the Observers in memory as long as the Notifier stays in memory, resulting in a memory leak. Developers have to be very careful to unsubscribe when necessary, and it often gets overlooked. To overcome these problems, the Prism event aggregator uses weak references to cache the reference (the Observer in this case) and releases the reference (memory) once the instance goes out of scope.

Using the event aggregator is very simple: declare a generic type of CompositePresentationEvent by inheriting from it.

using Microsoft.Practices.Prism.Events;
using TaxiCommon.BAO;

namespace TaxiCommon.CompositeEvents
{
    public class TaxiOnMoveEvent : CompositePresentationEvent<TaxiOnMove> { }
}

TaxiOnMove.cs includes the properties which we want to exchange between the two parties, FareView and TotalView.
using System;

namespace TaxiCommon.BAO
{
    public class TaxiOnMove
    {
        public TimeSpan MinutesAtTweleveMPH { get; set; }
        public double MilesAtSixMPH { get; set; }
    }
}

Let's take a look at FareViewModel (the Notifier) and how it raises the event. Here we are getting the event through GetEvent<..>() and publishing it with the payload.

private void OnAddMinutes(object obj)
{
    TaxiOnMove payload = new TaxiOnMove();
    if (MilesAtSixMPH != null)
        payload.MilesAtSixMPH = MilesAtSixMPH.Value;
    if (MinutesAtTweleveMPH != null)
        payload.MinutesAtTweleveMPH = new TimeSpan(0, 0, MinutesAtTweleveMPH.Value, 0);

    _eventAggregator.GetEvent<TaxiOnMoveEvent>().Publish(payload);
    ResetMinutesAndMiles();
}

And TotalViewModel (the Observer) subscribes to notifications by getting the event through GetEvent<..>().

namespace TaxiModules.ViewModels
{
    public class TotalViewModel : ObservableBase
    {
        ....
        private IEventAggregator _eventAggregator;

        public TotalViewModel(IEventAggregator eventAggregator)
        {
            _eventAggregator = eventAggregator;
            ...
        }

        private void SubscribeToEvents()
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>()
                .Subscribe(OnTaxiStarted, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiOnMoveEvent>()
                .Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiResetEvent>()
                .Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true);
        }

        ...
        private void OnTaxiMove(TaxiOnMove taxiOnMove)
        {
            OnMoveFare fare = new OnMoveFare(taxiOnMove);
            Fares.Add(fare);
            SetTotalFare(new[] { fare });
        }

        ....

7. MVVM through example

In this section we are going to look at the MVVM implementation through an example. I have all the modules declared in a single project, TaxiModules; again, it is not necessary to have them in one project. Once the user logs into the application, he/she will be greeted with the 'Engage the Taxi' screen, which is made of two user controls, FareView.xaml and TotalView.xaml. As you can see from the solution explorer, each of them has its own code-behind file and ViewModel class, FareViewModel.cs and TotalViewModel.cs.

Let's take a look at the FareView and how it interacts with FareViewModel using the MVVM implementation. FareView.xaml acts as the view and FareViewModel.cs is its view model. The FareView code-behind class:

namespace TaxiModules.Views
{
    /// <summary>
    /// Interaction logic for FareView.xaml
    /// </summary>
    public partial class FareView : UserControl
    {
        public FareView(FareViewModel viewModel)
        {
            InitializeComponent();
            this.Loaded += (s, e) =>
            {
                this.DataContext = viewModel;
            };
        }
    }
}

The FareView is bound to FareViewModel through the data context, and you will observe that DataContext is of type Object, i.e. the FareView doesn't really know the type of its view model (FareViewModel). This helps in separating the View and the ViewModel: since they are independent of each other, you could bind FareView to a FareViewModel2 as well, and the application would compile just fine. Let's take a look at the FareView xaml file.

<UserControl x:Class="TaxiModules.Views.FareView"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Toolkit="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"
    xmlns:Commands="clr-namespace:Microsoft.Practices.Prism.Commands;assembly=Microsoft.Practices.Prism">
    <Grid Margin="10" >
    ....
<Border Style="{DynamicResource innerBorder}" Grid.Row="0" Grid.Column="0" Grid.RowSpan="11" Grid.ColumnSpan="2" Panel.ZIndex="1"/>

<Label Grid.Row="0" Content="Engage the Taxi" Style="{DynamicResource innerHeader}"/>
<Label Grid.Row="1" Content="Select the State"/>
<ComboBox Grid.Row="1" Grid.Column="1" ItemsSource="{Binding States}" Height="auto">
    <ComboBox.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Name}"/>
        </DataTemplate>
    </ComboBox.ItemTemplate>
    <ComboBox.SelectedItem>
        <Binding Path="SelectedState" Mode="TwoWay"/>
    </ComboBox.SelectedItem>
</ComboBox>
<Label Grid.Row="2" Content="Select the Date of Entry"/>
<Toolkit:DatePicker Grid.Row="2" Grid.Column="1" SelectedDate="{Binding DateOfEntry, ValidatesOnDataErrors=true}" />
<Label Grid.Row="3" Content="Enter time 24hr format"/>
<TextBox Grid.Row="3" Grid.Column="1" Text="{Binding TimeOfEntry, TargetNullValue=''}"/>
<Button Grid.Row="4" Grid.Column="1" Content="Start the Meter" Commands:Click.Command="{Binding StartMeterCommand}" />

<Label Grid.Row="5" Content="Run the Taxi" Style="{DynamicResource innerHeader}"/>
<Label Grid.Row="6" Content="Number of Miles &lt;@6mph"/>
<TextBox Grid.Row="6" Grid.Column="1" Text="{Binding MilesAtSixMPH, TargetNullValue='', ValidatesOnDataErrors=true}"/>
<Label Grid.Row="7" Content="Number of Minutes @12mph"/>
<TextBox Grid.Row="7" Grid.Column="1" Text="{Binding MinutesAtTweleveMPH, TargetNullValue=''}"/>
<Button Grid.Row="8" Grid.Column="1" Content="Add Minutes and Miles " Commands:Click.Command="{Binding AddMinutesCommand}"/>
<Label Grid.Row="9" Content="Other Operations" Style="{DynamicResource innerHeader}"/>
<Button Grid.Row="10" Grid.Column="1" Content="Reset the Meter" Commands:Click.Command="{Binding ResetCommand}"/>

</Grid>
</UserControl>

The highlighted code above shows data binding; for example, the ComboBox which displays the list of states has its ItemsSource bound to the States property, its DataTemplate bound to Name, and its SelectedItem bound to SelectedState. You might be wondering what all these properties are and how the view is able to bind to them. The answer lies in the data context: when you bind a control, WPF looks for a data context on the root object (the Grid in this case), and if it can't find one there, it looks at the root's root, i.e. the FareView UserControl, which is bound to FareViewModel. Each of those properties has to be declared on the ViewModel for the View to bind correctly. To put it simply, the View is bound to the ViewModel through a data context of type Object, and every control that is bound on the View actually binds to a public property on the ViewModel.

Let's look at the ViewModel code (the following is not an exact copy of FareViewModel.cs; only the code relevant to this section is pasted).

namespace TaxiModules.ViewModels
{
    public class FareViewModel : ObservableBase, IDataErrorInfo
    {
        public List<USState> States
        {
            get { return USStates.StateList; }
        }

        public USState SelectedState
        {
            get { return _selectedState; }
            set
            {
                _selectedState = value;
                RaisePropertyChanged(_selectedStatePropertyName);
            }
        }

        public DateTime? DateOfEntry
        {
            get { return _dateOfEntry; }
            set
            {
                _dateOfEntry = value;
                RaisePropertyChanged(_dateOfEntryPropertyName);
            }
        }

        public TimeSpan? TimeOfEntry
        {
            get { return _timeOfEntry; }
            set
            {
                _timeOfEntry = value;
                RaisePropertyChanged(_timeOfEntryPropertyName);
            }
        }

        public double? MilesAtSixMPH
        {
            get { return _milesAtSixMPH; }
            set
            {
                _milesAtSixMPH = value;
                RaisePropertyChanged(_distanceAtSixMPHPropertyName);
            }
        }

        public int?
MinutesAtTweleveMPH
        {
            get { return _minutesAtTweleveMPH; }
            set
            {
                _minutesAtTweleveMPH = value;
                RaisePropertyChanged(_minutesAtTweleveMPHPropertyName);
            }
        }

        public ICommand StartMeterCommand
        {
            get
            {
                if (_startMeterCommand == null)
                {
                    _startMeterCommand = new DelegateCommand<object>(OnStartMeter, CanStartMeter);
                }
                return _startMeterCommand;
            }
        }

        public ICommand AddMinutesCommand
        {
            get
            {
                if (_addMinutesCommand == null)
                {
                    _addMinutesCommand = new DelegateCommand<object>(OnAddMinutes, CanAddMinutes);
                }
                return _addMinutesCommand;
            }
        }

        public ICommand ResetCommand
        {
            get
            {
                if (_resetCommand == null)
                {
                    _resetCommand = new DelegateCommand<object>(OnResetCommand);
                }
                return _resetCommand;
            }
        }

        private void OnStartMeter(object obj)
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>().Publish(
                new TaxiStarted()
                {
                    EngagedOn = DateOfEntry.Value.Date + TimeOfEntry.Value,
                    EngagedState = SelectedState.Value
                });

            _isMeterStarted = true;
            OnPropertyChanged(this, null);
        }
    }
}

Views communicate user actions, like button clicks and tree view item selections, using commands. When the user clicks the 'Start the Meter' button, it invokes StartMeterCommand, which calls the OnStartMeter method, which in turn publishes the TaxiStartedEvent to TotalViewModel through the event aggregator.

namespace TaxiModules.ViewModels
{
    public class TotalViewModel : ObservableBase
    {
        ...
        private IEventAggregator _eventAggregator;

        public TotalViewModel(IEventAggregator eventAggregator)
        {
            _eventAggregator = eventAggregator;

            InitializePropertyNames();
            InitializeModel();
            SubscribeToEvents();
        }

        public decimal? TotalFare
        {
            get { return _totalFare; }
            set
            {
                _totalFare = value;
                RaisePropertyChanged(_totalFarePropertyName);
            }
        }
        ....
        private void SubscribeToEvents()
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>().Subscribe(OnTaxiStarted, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiOnMoveEvent>().Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiResetEvent>().Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true);
        }

        private void OnTaxiStarted(TaxiStarted taxiStarted)
        {
            Fares.Add(new EntryFare());
            Fares.Add(new StateTaxFare(taxiStarted));
            Fares.Add(new NightSurchargeFare(taxiStarted));
            Fares.Add(new PeakHourWeekdayFare(taxiStarted));

            SetTotalFare(Fares);
        }

        private void SetTotalFare(IEnumerable<IFare> fares)
        {
            TotalFare = (_totalFare ?? 0) + TaxiFareHelper.GetTotalFare(fares);
        }
        ....
    }
}

TotalViewModel subscribes to TaxiStartedEvent and the other events. When TaxiStartedEvent is published, it calls the OnTaxiStarted method, which sets the total fare, including the entry fee, state tax, nightly surcharge and peak-hour weekday fare.

Note that TotalViewModel derives from ObservableBase, which implements the RaisePropertyChanged method that we invoke in the setter of the TotalFare property; once we update the TotalFare property, it raises an event that allows the TotalFare text box to fetch the new value through the data context. The ViewModel communicates with the View through the data context and has no knowledge of the View, keeping the ViewModel and the View loosely coupled.

I have attached the source code (.NET 4.0, Prism 4.0, VS 2010); download it and play with it, and don't forget to leave your comments.
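One payoff of this loose coupling, which the TaxiTests project hints at, is that the view models can be exercised without any WPF views at all. The following is a minimal sketch of such a test, assuming Prism's real EventAggregator and the FareViewModel shown above; the NUnit attributes and the specific asserted values are illustrative choices, not taken from the article.

// Minimal sketch of a ViewModel test, in the spirit of the TaxiTests project.
// Assumes Prism's EventAggregator and the FareViewModel shown above;
// the test framework (NUnit here) is an illustrative choice.
using Microsoft.Practices.Prism.Events;
using NUnit.Framework;
using TaxiCommon.BAO;
using TaxiCommon.CompositeEvents;
using TaxiModules.ViewModels;

[TestFixture]
public class FareViewModelTests
{
    [Test]
    public void AddMinutesCommand_publishes_TaxiOnMoveEvent()
    {
        var aggregator = new EventAggregator();
        TaxiOnMove received = null;

        // Subscribe the way TotalViewModel does, but capture the payload locally.
        // keepSubscriberReferenceAlive = true so the weak-reference cache
        // cannot drop the closure during the test.
        aggregator.GetEvent<TaxiOnMoveEvent>().Subscribe(p => received = p, true);

        var viewModel = new FareViewModel(aggregator);
        viewModel.MilesAtSixMPH = 2.5;
        viewModel.MinutesAtTweleveMPH = 10;

        // No View involved - we drive the ViewModel through its command,
        // exactly as the 'Add Minutes and Miles' button would.
        viewModel.AddMinutesCommand.Execute(null);

        Assert.IsNotNull(received);
        Assert.AreEqual(2.5, received.MilesAtSixMPH);
    }
}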

    Read the article

  • SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

    - by mseika
SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

Dear SOA partner community member,

To provide our community members the best of our knowledge, we want your feedback on our SOA Partner community. Thus we are organizing the SOA Partner Community Survey 2012. We request you to participate in the survey and give your valuable feedback on various areas of marketing, sales and education. To continue our successful BPM Suite, Oracle is launching, together with you, the Process Accelerators initiative. It's your opportunity to co-develop and market predefined processes. The Oracle Fusion Applications Design Patterns are a great tool to develop your SOA or BPM solution or process accelerators. To promote your SOA & BPM Specialization we continue to offer several benefits. This month we would like to highlight our Specialization Plaques - make sure you request one for your office! Our Fusion Middleware Summer Camps are booked out; if you could not get a seat, you can attend the SOA & BPM track @ Virtual Developer Day: Oracle Fusion Development. The Oracle demo systems offer two new demos: Business Driven Development based on BPM Suite, and SOA Lifecycle Management.

Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

NEW CONTENT

Community Survey
Process Accelerators Kit
Plaques SOA & BPM Specialized
SOA & BPM at Virtual Developer Day
News from our Partners & Community
Overview of SOA Diagnostics in 11.1.1.6
Business driven development (BDD) demo now available!
SOA Lifecycle Management
Oracle Fusion Applications design patterns
Updated material by Oracle

Connect and Network: SOA Blogs | SOA on Facebook | SOA on LinkedIn | SOA on Twitter | Mix SOA Forum

COMMUNITY SURVEY

Like every year, we would like to get your feedback in our SOA Partner Community Survey 2012. Make sure that you attend, to further develop our community and support our planning! It is key for us to get your feedback to prepare for the next fiscal year. Back to top

PROCESS ACCELERATORS KIT

Oracle is very interested to co-develop and market with you, our partners, pre-defined processes for BPM Suite. I am very happy to announce a new program called "Oracle BPM Partner Solution Catalog". This program will provide a one-stop shop for our customers looking for Oracle BPM partner solutions available in the market today. The Oracle BPM Solution Catalog will be hosted on our very popular Oracle Technology Network (OTN). To give you an idea of the scale of customer visibility, OTN today receives over 1 million hits per day from our business and developer community. We would like to invite you to list your Oracle BPM 11g solutions available today. In order to participate in this program, you need to do the following: Fill in the attached slide templates - #3 and #4 for each Oracle BPM 11g solution you would like to list on OTN. Please add links to whitepapers, videos and references for the specific solution in the template slide. We recommend that you create a landing page on your website for these linked artifacts and just point to the same from within the PowerPoint template. This will give you the flexibility to update the information as frequently as needed. If you have the particular solution in production or a reference available, please list them as well. Send the PowerPoint template slides (1 set of slides for each Oracle BPM solution) to [email protected]. In addition to having the opportunity to list your solutions on OTN for Oracle customers, you will have the chance to advertise your new wins/implementations/solutions in an Oracle-sponsored PM webinar held every quarter.
This program is targeted to go live by the end of summer 2012. At this point we are targeting a soft launch at the end of July 2012, so send in your BPM solution information as soon as possible. We would love to have your solution(s) listed in the "Oracle BPM Partner Solution Catalog" at the time of the launch. This will be a live repository, so you can keep adding more solutions as they become available. If you have any questions, please feel free to contact us: [email protected], Product Strategy Director, Oracle BPM, Phone +1 650.506.5486. Thank you, and we look forward to hearing from you. Oracle BPM team

Process Accelerators Overview.pdf
ProcessAcceleratorsDataSheet.pdf
Demos draUPK.zip & trmUPK.zip
BPM Solution repository slides.ppt

Additional BPM material

BPM Process Development Lifecycle - Document that describes the recommended approach to collaborative process modeling across business and IT tools
ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) by Andrejus Baranovskis
BPMN process editor problems in 11.1.1.6 by Mark Nelson
BPM - Disable DBMS job to refresh B2B Materialized View by Mark Nelson

For the complete kit please visit the BPM folder at our SOA Community Workspace (SOA Community membership required). For the complete presentation please visit our SOA Community Workspace (SOA Community membership required). Information is Oracle and Partner confidential! Back to top

PLAQUES SOA & BPM SPECIALIZED

We continue to offer you a nice SOA & BPM Specialization plaque with your logo to prove your success. If you are a SOA or BPM Specialized partner and would like to request the plaque, please send Brigitte an e-mail with the following information:

Partner Name
Partner logo (preferred eps file)
Partner Status gold or platinum
Your shipping address
Your Specialization: SOA or BPM

We recommend mounting the plaque at your office reception; in addition, you can use the SOA Specialization logos at your website. Download logo: Gold & Platinum, or the BPM logos Gold & Platinum. Back to top

SOA & BPM AT VIRTUAL DEVELOPER DAY

Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:

Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
Mobile Application Development with ADF Mobile
Oracle ADF development with Eclipse
Oracle WebCenter Portal and ADF Development
Application Lifecycle Management with ADF
Building Process Centric Applications with ADF and BPM
Oracle Business Intelligence and ADF Integration
Live Q&A chats with Oracle technical staff

Developer lead, manager or architect - this event has something for everyone. Don't miss this opportunity. Tuesday, July 10, 2012: 9:00 a.m. PT - 1:00 p.m. PT / 11:00 a.m. CT - 3:00 p.m. CT / 12:00 p.m. ET - 4:00 p.m. ET / 1:00 p.m. BRT - 5:00 p.m. BRT. Register online now! for this FREE event.
Agenda:
09:00 am - Opening
09:30 am - Keynote: Oracle Fusion Development
Track 1: Introduction to Fusion Development / Track 2: What's New in Fusion Development / Track 3: Fusion Development in the Enterprise
10:00 am - Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? / Mobile Application Development with ADF Mobile / Oracle WebCenter Portal and ADF Development
11:00 am - Rich Web UI made simple - an ADF Faces Overview / Oracle Enterprise Pack for Eclipse - ADF Development / Building Process Centric Applications with ADF and BPM
12:00 noon - Next Generation Controller for JSF / Application Lifecycle Management for ADF / Oracle Business Intelligence and ADF Integration

*Hands-On Lab - WebCenter and ADF Lab w/ JDeveloper. Lab materials will be provided ahead of the event to give you ample time to work through the lab and increase the productivity of the live chat sessions the day of the event. Sessions abstracts. Register online now! for this FREE event. Read more on Community Events and post your comment here. Back to top

NEWS FROM OUR PARTNERS AND COMMUNITY

Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

JDeveloper & ADF: Troubleshooting BPMN process editor problems in 11.1.1.6 http://dlvr.it/1p0FfS
SOA Community: SOA & BPM @ Virtual Developer Day: Oracle Fusion Development - July 10th 2012 https://soacommunity.wordpress.com/2012/07/02/soa-bpm-virtual-developer-day-oracle-fusion-developmentjuly-10th-2012/ #soacommunity #soa #bom #education
orclateamsoa: A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove. http://ow.ly/1kYqES
SOA Community: SOA Community Newsletter June 2012 http://wp.me/p10C8u-qw
SOA Community: BPMN process editor problems in 11.1.1.6 by Mark Nelson http://redstack.wordpress.com/2012/06/27/bpmn-process-editor-problems-in-11-1-1-6 #soacommunity #bpm
OTNArchBeat: SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G
Andrejus Baranovskis: ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) http://fb.me/1coX4r1X1
SOA Community: SOA Learning Library provides a comprehensive curriculum for the SOA and BPM product suites https://soacommunity.wordpress.com/2012/06/27/soa-learning-library #soacommunity #soa #bpm
OTNArchBeat: A Universal JMX Client for Weblogic - Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser http://pub.vitrue.com/mQVZ
OTNArchBeat: BPM - Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0
Oracle SOA: Learn how Choice Hotels Implements Innovative Google Maps Solution with #OracleSOA http://bit.ly/MTwIJ3
SOA Community: top Tweets SOA Partner Community - June 2012. Send your tweets @soacommunity #soacommunity https://soacommunity.wordpress.com/2012/06/25/top-tweets-soa-partner-community-june-2012
Torsten Winterberg: #OPITZ is pushing Oracle commitment to the next level: New Specializations done: ADF, BPM, WLS, Exadata http://bit.ly/KX1WVS
ServiceTechSymposium: Only 8 more days left until Super Early Bird Registration Discount expires! http://www.servicetechsymposium.com
OracleBlogs: SOA Management in 3 minutes - Video explainer http://ow.ly/1kN5pn
SOA Community: SOA, Cloud & Service Technology Symposium 2012 London - Enter Promo Code: Djmxz370 https://soacommunity.wordpress.com/2012/06/22/soa-cloud-service-technology-symposium-2012-london #soasymposium #soacommunity #soa
Heidi Buelow: Great course!
w David Read RT @soacommunity: product management ADF for BPM training 5 seats left https://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity
SOA Community: product management ADF for BPM training 5 seats left https://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity
OTNArchBeat: Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin http://pub.vitrue.com/UEiF
OTNArchBeat: SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount http://pub.vitrue.com/8E0J
SOA Community: Become a facebook fan of soacommunity http://www.facebook.com/soacommunity #soacommunity
SOA Community: SOA Suite HealthCare Integration Architecture https://blogs.oracle.com/SOAForHealthcare/entry/soa_suite_healthcare_integration_architecture #soacommunity #soa
Andrejus Baranovskis: Running Pre-built Virtual Machine for SOA Suite and BPM Suite 11g PS5 on Mac OS X Snow Leopard (10.6) http://fb.me/vB8nO0Vg
OracleBlogs: Principles of Service-Oriented Architecture by Douwe P. van den Bos http://ow.ly/1kIcOP
OTNArchBeat: Oracle Public Cloud Architecture | @TylerJewell http://ow.ly/bHAcL
The SOA Network: Business Process Management, Service-Oriented Architecture, and Web 2.0: Business Transformation or. http://bit.ly/LBgREL #ITNews #SOA
OracleBlogs: Oracle SOA Foundation Practitioner Certification http://ow.ly/1kGYYg
Frank Nimphius: Learn Advanced ADF. ORACLE Fusion Middleware Summer Camps in Lisbon - July 9th - 13th http://bit.ly/KGCl3i
SOA Community: Transform Your Application Integration with Best Practices from Oracle Customers https://blogs.oracle.com/SOA/entry/transform_your_application_integration_with #soacommunity #soa #bpm
Simone Geib: What you always wanted to know about #oraclesoa diagnostics: Shawn Bailey, Overview of SOA Diagnostics in 11.1.1.6 http://ow.ly/bxK0M
Oracle SOA: Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LsNDGl
OTNArchBeat: New VirtualBox images for Oracle SOA Suite & Oracle BPM Suite 11.1.1.6.0 http://ow.ly/bwDAl
OracleBlogs: Process development lifecycle in Oracle BPM 11g http://ow.ly/1ktesY
Daniel Amadei: New post: Oracle BPEL 11g Message Delivery & Recovery http://amadei.com.br/blog/index.php/oracle-bpel-11g-message-delivery
SOA Community: Sending out the June edition of the #soacommunity newsletter - read it or become a member http://www.oracle.com/goto/emea/soa #soa #bpm
Arun Pareek: For the past six months Ahmed Aboulnaga and me have been working on Oracle SOA Suite 11g Administrator's Handbook http://lnkd.in/CAvpUQ
SOA Community: Sun shine all day no clouds - solar eclipse is over... #sunshine #cloud http://www.infoq.com/presentations/Swarm-Computing
Michel Schildmeijer: Watch my blog Oracle Service Bus 11g: listing projects and services with WLST - part 1 http://lnkd.in/B7f3GQ @TITAN_GS @wlscommunity
OTNArchBeat: Book Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Raheja http://ow.ly/bn2cc
OTNArchBeat: Driving from Business Architecture to Business Process Services | @vghariharan http://ow.ly/bn5UB
OTNArchBeat: SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 - Part II | Dawit Lessanu http://ow.ly/bn6sX
Simone Geib: Contact me directly for ideas how to improve http://bit.ly/advancedsoasuite and additional posts, presentations, white papers, ...
#soasuite
Simone Geib: #soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite
OracleBlogs: June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA
ServiceTechSymposium: New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG http://ow.ly/bjjOe
Debra Lilley: looks good - real proof people are using the apps! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps
demed: rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/
SOA Community: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle
SOA Community: Happy New Year #soacommunity thanks for the business! Time for a drink http://pic.twitter.com/zkK08KWB
OTNArchBeat: Who should 'own' the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q
SOA Community: top Tweets SOA Partner Community - May 2012 http://wp.me/p10C8u-pP
ServiceTechSymposium: New session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php#elastic_soa_in_the_cloud
orclateamsoa: A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k2cnl
ServiceTechSymposium: New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Link http://www.servicetechsymposium.com/agenda2012.php#soa_governance_at_edp
SOA Community: VirtualBox image SOA Suite & BPM Suite 11.1.1.6.0 - Your feedback? http://wp.me/p10C8u-qh
Oracle Middleware: Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LU1y5N
OTNArchBeat: Goodbye, Silos. Hello SOA. | @stephanieoverby http://pub.vitrue.com/NJJO
SOA Community: BPM Standard Edition - to start your BPM project http://wp.me/p10C8u-qj

Please feel free to send us your news! And add your blog to our SOA blog wiki. Back to top

OVERVIEW OF SOA DIAGNOSTICS IN 11.1.1.6

What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing:

RDA: Remote Diagnostic Agent
DFW: Diagnostic Framework
Selective Tracing
DMS: Dynamic Monitoring Service
ODL: Oracle Diagnostic Logging
ADR: Automatic Diagnostics Repository
ADRCI: Automatic Diagnostics Repository Command Interpreter
WLDF: WebLogic Diagnostic Framework

This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool, or 'WLST', interface.
WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the sections below. In this post, when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. Read the full blog post here. Read more on Oracle and post your comment here. Back to top

BUSINESS DRIVEN DEVELOPMENT (BDD) DEMO NOW AVAILABLE!

For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. DSS is pleased to announce the availability of the demo "Business Driven Development". This innovative demonstration uses a case-study approach to show business users how they can easily streamline their business processes, delivering greater efficiency, agility, visibility and collaboration with Oracle BPM and WebCenter. The BDD demonstration uses a case study-based approach to highlight a business problem at a fictional company, Avitek Corporation, and uses Oracle BPM and Oracle WebCenter to solve the business problem. This holistic approach has specifically been used to appeal to a non-technical business analyst user. This demo is NOT focused on product features, but aims to guide users through a complete BPM lifecycle. The scenario is based on improving a simple order process (scenario details are in the demo script). Avitek Corporation is suffering from a manual, email-driven ordering process. Sales reps don't know where the customer orders are stuck (no visibility), and finance users are unable to manually approve every order (no automation). There are several areas where this process can be improved with Business Process Management technology. This demo shows how improving the following areas will significantly help resolve the business problems Avitek Corporation is facing:

Utilizing BPM for process management, rather than an unregulated, email-based process.
Utilizing automated services, rather than requiring a human to key into a system. For example, Finance checking the customer's credit rating is something that could be automated.
Centralizing business rules that can be integrated into a business process, rather than requiring a human to process them. For example, Finance must determine when orders can be automatically approved.
Providing insight and visibility into the process. For example, a sales rep needs to know the status of their customers' orders.

The BDD demo uses the following products: Oracle BPM Suite 11g PS4FP, Oracle WebCenter 11g PS4FP (for Process Spaces), Oracle Business Activity Monitoring 11g, Oracle Database 11g. Back to top

SOA LIFECYCLE MANAGEMENT

For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the SOA Management demo that showcases some of the key provisioning and lifecycle management capabilities of SOA Management Pack Enterprise Edition (EE). This demo specifically focuses on some of the lifecycle management solutions for Oracle SOA Suite and Oracle Service Bus (OSB).

Demo Highlights

The demo showcases the following capabilities:

Provisioning of SOA Composites
Provisioning of OSB Projects
Provisioning of SOA and OSB artifacts in a future maintenance window

Back to top

ORACLE FUSION APPLICATIONS DESIGN PATTERNS

The Oracle Fusion Applications user experience design patterns are published!
These new, reusable usability solutions and best practices, which join the Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when:

tailoring an Oracle Fusion application,
creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road, or
designing exciting, new, highly usable applications in the cloud or on-premise.

Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What's the best way to get started? We've made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you're on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments! Read more on Fusionapps and post your comment here. Back to top

UPDATED ORACLE MATERIAL

Integrated SOA Gateway Documentation - Implementation Guide | Developer's Guide
Webcast Series: Oracle's SOA and Oracle Business Process Management Solutions (Choice Hotels, Eaton, Farmers Insurance)
BAM design pointers by Kavitha Srinivasan

Seeking Oracle Fusion Middleware Go Live Stories: Oracle Fusion Middleware product management is looking for recent go-live stories to share with the Oracle sales team, sales consulting, product management and other internal groups. Customer contact details may remain anonymous. Your successful implementation will be featured in a quarterly report. The chance to present on an internal webcast is also available. Contact Maria Forney ([email protected]) if you have a noteworthy implementation success story. This is a good opportunity for partners interested in showcasing Oracle Fusion Middleware implementations and gaining more exposure within Oracle.

Performance tuning resources. All in one: docs, blogs, WPs, ppts: http://bit.ly/soa_resources Back to top

HAVE YOU MISSED OUR LAST SOA PARTNER COMMUNITY WEBCASTS?

UPK Webcast & Business Driven Application Management & BPM11g & Application Grid & GoldenGate & Fusion Middleware Pricing & OC4J to WebLogic & Next Generation SOA & Fusion Middleware in Utility & Fusion Middleware in Communications & Fusion Middleware in Public Services & Fusion Middleware in Financial Services

Please check your local OPN trainings calendar for additional training dates and locations.
Back to top

SOA PARTNER COMMUNITY CALENDAR

On-Demand Trainings
Event Name | Language | Type
SOA Virtual Developers Day | English | Tech

In-Class Trainings
Date | Event name | Location / Country | Contact person | Type
09-13.07.2012 | BPM Suite 11g advanced training by David Read | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | ADF 11g advanced training by Grant Ronald and Frank Nimphius | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata | Lisbon, Portugal | Jürgen Kress | Tech
10.07.2012 | Fusion Middleware Virtual Developer Day | Online | OTN | Tech
10-12.07.2012 | WebLogic 12c training by Cosmin Tudor | Lisbon, Portugal | Jürgen Kress | Tech
16-18.07.2012 | SOA Suite 11g advanced training by Niall Commiskey | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | ADF for BPM Suite 11g advanced training by David Read | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | WebCenter Sites 11g advanced training by Product Management | Munich, Germany | Jürgen Kress | Tech
17-20.07.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
23-26.07.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
29-31.08.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
02-05.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
28-30.11.2012 | Oracle AIA 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
11-14.12.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
20-22.2.2013 | Oracle AIA 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
14-17.1.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.3.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech

Please check your local OPN Training Calendar for additional training and locations here. Back to top

SOASCHOOL.COM - SOA CERTIFIED PROFESSIONAL (SOACP) PROGRAM

The SOASchool.com - SOA Certified Professional (SOACP) program is dedicated to excellence in the field of SOA and service-oriented computing. Through a series of seasoned course modules and exams, IT professionals have the opportunity to obtain a number of different certifications to recognize their accomplishment of gaining "project ready" SOA proficiency. This comprehensive and strictly vendor-neutral program was developed in cooperation with best-selling SOA author Thomas Erl and several major SOA organizations and academic institutions. Through the involvement of the SOA Education Committee, course contents and certification requirements are constantly reviewed and revised to stay current with developments in the service-oriented computing industry. The program is currently comprised of 12 course modules and 5 certifications and is expanding to 18 course modules and 8 certifications throughout 2009. For more information, visit www.soaschool.com and www.soacp.com.

Blog | Twitter | LinkedIn | Mix | Forum | Wiki

Back to top

YOUR CONTENT ON THE NEWSLETTER AND ON THE SOA COMMUNITY PORTAL

Publishing Your Stories: We would like to invite our partners to publish information in the newsletter or on our SOA Community portal. Especially, we are looking for your real-life experience with our SOA technology. Please send your documents to Jürgen Kress. We look forward to getting your suggestions!
Back to top

SOA DISCUSSION FORUM BECOMES INTERACTIVE AT THE SOA COMMUNITY!

Do you want to chat to experts, including partners and Oracle SOA Product Development? Do you want to get the latest information about our SOA solutions and events? Attend our private online SOA Discussion Forum at OTN. Please send your OTN forums user name to Brigitte Felisaz. You must be a registered user to access the SOA Discussion Forum. Back to top

INVITE YOUR COLLEAGUES TO JOIN THE SOA COMMUNITY

Please feel free to invite your colleagues to join the SOA Community and to participate in the SOA Assessment tests. For registration please login to the Oracle PartnerNetwork and go to: www.oracle.com/goto/emea/soa For any questions on the above, or concerning SOA and Oracle in general, please contact the Oracle EMEA Alliances & Channels SOA Team.

Best regards,
Oracle EMEA SOA Team
Jürgen Kress, SOA Partner Adoption EMEA
Tel. +49 89 1430 1479
E-Mail: [email protected]

    Read the article

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5, and along with them the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft's new jargon for a service, in case you're wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications. If you're interested in a high-level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available on GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

Getting Started

To start, I'll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I'll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

Add a Web API Controller Class

Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.

Figure 1: This is how you create a new Controller Class in Visual Studio

Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I'll use a Music Album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs, each with a SongName and SongLength as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, add a new class Album.cs and copy the code into it. There's a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current member that I'll use for the examples. Before we look at what goes into the controller class though, let's hook up routing so we can access this new controller.

Hooking up Routing in Global.asax

To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
Using an extension method on ASP.NET's static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.

using System;
using System.Web.Routing;
using System.Web.Http;

namespace AspNetWebApi
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapHttpRoute(
                name: "AlbumVerbs",
                routeTemplate: "albums/{title}",
                defaults: new { symbol = RouteParameter.Optional, controller = "AlbumApi" }
            );
        }
    }
}

This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate, respectively.

HTTP Verb Routing

Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I've defined does not include an {action} route value or an action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the controller method to call: a GET request maps to any method that starts with Get, so methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature to match a route, query string or POST values. In lieu of the method name, the [HttpGet, HttpPost, HttpPut, HttpDelete, etc.] attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST-style resource APIs, it's not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I'll talk more about alternate routes later. When you're finished with the initial creation of files, your project should look like Figure 2.

Figure 2: The initial project has the new API Controller and Album model

Creating a small Album Model

Now it's time to create some controller methods to serve data. For these examples, I'll use a very simple Album and Songs model to play with, as shown in Listing 2.
public class Song
{
    public string AlbumId { get; set; }
    [Required, StringLength(80)]
    public string SongName { get; set; }
    [StringLength(5)]
    public string SongLength { get; set; }
}

public class Album
{
    public string Id { get; set; }
    [Required, StringLength(80)]
    public string AlbumName { get; set; }
    [StringLength(80)]
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    [StringLength(150)]
    public string AlbumImageUrl { get; set; }
    [StringLength(200)]
    public string AmazonUrl { get; set; }

    public virtual List<Song> Songs { get; set; }

    public Album()
    {
        Songs = new List<Song>();
        Entered = DateTime.Now;

        // Poor man's unique Id off GUID hash
        Id = Guid.NewGuid().GetHashCode().ToString("x");
    }

    public void AddSong(string songName, string songLength = null)
    {
        this.Songs.Add(new Song()
        {
            AlbumId = this.Id,
            SongName = songName,
            SongLength = songLength
        });
    }
}

Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

public static class AlbumData
{
    // sample data - static list
    public static List<Album> Current = CreateSampleAlbumData();

    /// <summary>
    /// Create some sample data
    /// </summary>
    /// <returns></returns>
    public static List<Album> CreateSampleAlbumData()
    { … }
}

You can check out the full code for the data generation online.

Creating an AlbumApiController

Web API shares many concepts of ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.

public class AlbumApiController : ApiController
{
    public IEnumerable<Album> GetAlbums()
    {
        var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
        return albums;
    }

    public Album GetAlbum(string title)
    {
        var album = AlbumData.Current
            .SingleOrDefault(alb => alb.AlbumName.Contains(title));
        return album;
    }
}

To access the first two requests, you can use the following URLs in your browser:

http://localhost/aspnetWebApi/albums
http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route.
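Before moving on to content negotiation, it is worth sketching the write side of HTTP Verb routing as well. The following PostAlbum() method is a minimal sketch, not one of the article's listings: because its name starts with Post, the same "albums/{title}" route dispatches POST requests to it, and the 201 Created response via Request.CreateResponse() mirrors the pattern shown later in this article.

// A minimal sketch (not from the article's listings): a POST endpoint
// picked up by HTTP Verb routing because the method name starts with "Post".
public HttpResponseMessage PostAlbum(Album album)
{
    // Web API deserializes the JSON or XML request body into the Album
    // parameter based on the request's Content-Type header.
    AlbumData.Current.Add(album);

    // Return 201 Created along with the saved entity; the response body
    // goes through the same content negotiation as the GET methods.
    return Request.CreateResponse<Album>(HttpStatusCode.Created, album);
}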
Content Negotiation

When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action, as these two browsers send different HTTP Accept headers.

Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for:

Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

IE9 asks for:

Accept: text/html, application/xhtml+xml, */*

Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that works on Web API by default, and so it returns the default format, which is JSON. This is an important and very useful feature that was missing from any previous Microsoft REST tool: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don't have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code – Web API automatically performs the deserialization from the content.

Accessing Web API JSON Data with jQuery

A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

$.getJSON("albums/", function (albums) {
    // make knockout template visible
    $(".album").show();

    // create view object and attach array
    var view = { albums: albums };
    ko.applyBindings(view);
});

Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

Figure 4: The Album Display sample uses JSON data loaded from Web API.
The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which as you can see requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it's entirely up to you to decide what to do with the data in your client code.

Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

$(".albumlink").live("click", function () {
    var id = $(this).data("id"); // title
    $.getJSON("albums/" + id, function (album) {
        ko.applyBindings(album, $("#divAlbumDialog")[0]);
        $("#divAlbumDialog").show();
    });
});

Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute.

Explicitly Overriding Output Format

When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters collection and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR).

If automatic processing is not desirable in a particular controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf.

HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes.

For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation.
To do this, I can adjust the output format and headers as shown in Listing 4.

public HttpResponseMessage GetAlbums()
{
    var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

    // Create a new HttpResponse with Json Formatter explicitly
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new ObjectContent<IEnumerable<Album>>(
        albums, new JsonMediaTypeFormatter());

    // Get Default Formatter based on Content Negotiation
    //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    resp.Headers.ConnectionClose = true;
    resp.Headers.CacheControl = new CacheControlHeaderValue();
    resp.Headers.CacheControl.Public = true;

    return resp;
}

This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON.

If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the response instance using the Request.CreateResponse method:

var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

This provides you with an HttpResponseMessage object that's pre-configured with the default formatter based on content negotiation. Once you have an HttpResponseMessage object, you can easily control most HTTP aspects on it. What's sweet here is that there are many more detailed properties on HttpResponseMessage than on the core ASP.NET Response object, with most options being explicitly configurable via enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

Non-Serialized Results

The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a controller method.

[HttpGet]
public HttpResponseMessage AlbumArt(string title)
{
    var album = AlbumData.Current.FirstOrDefault(alb => alb.AlbumName.StartsWith(title));
    if (album == null)
    {
        var resp = Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found"));
        return resp;
    }

    // kinda silly - we would normally serve this directly
    // but hey - it's a demo.
    var http = new WebClient();
    var imageData = http.DownloadData(album.AlbumImageUrl);

    // create response and return
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(imageData);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

    return result;
}

The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can't be found or a validation error – you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error message, as it's an image).
Note that if you are not using HTTP Verb-based routing, or are not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image.

When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

return Request.CreateResponse<ApiMessageError>(
    HttpStatusCode.NotFound,
    new ApiMessageError("Album not found")
);

If the album can be found, the image is returned. The image is downloaded into a byte[] array and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser.

There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out.

Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing. They also make it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

Routing Again

Ok, let's get back to the image example. Using the original HTTP Verb routing we set up, there's no good way to serve the image. In order to return my album art image I'd like to use a URL like this:

http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order to create a URL like this, I have to create a new controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action-based routing into an HTTP Verb routing controller – you can only map HTTP Verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller.

There are a number of ways around this, but all involve additional controllers. Personally, I think it's easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
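The new controller itself is unremarkable – a minimal sketch of its shape, assuming the method simply moves over unchanged (the actual sample project may declare more members here):

public class AlbumRpcApiController : ApiController
{
    // Because this controller is action-routed, the explicit [HttpGet]
    // attribute is what maps AlbumArt() to GET requests.
    [HttpGet]
    public HttpResponseMessage AlbumArt(string title)
    {
        // stand-in for the body shown in Listing 5
        throw new NotImplementedException();
    }
}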
For the image URL to work with the new AlbumRpcApiController, you need custom routes placed before the default route from Listing 1. The action route is set up like this:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

(A separate custom route maps albums/{title}/image to the AlbumArt() action; see the sample code.) Now I can use either of the following URLs to access the image:

Custom route (/albums/{title}/image):
http://localhost/aspnetWebApi/albums/PowerAge/image

Action route (/albums/rpc/{action}/{title}):
http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

public HttpResponseMessage PostAlbum(Album album)
{
    if (!this.ModelState.IsValid)
    {
        // my custom error class
        var error = new ApiMessageError() { message = "Model is invalid" };

        // add errors into our client error model for client
        foreach (var prop in ModelState.Values)
        {
            var modelError = prop.Errors.FirstOrDefault();
            if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                error.errors.Add(modelError.ErrorMessage);
            else
                error.errors.Add(modelError.Exception.Message);
        }

        return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
    }

    // update song id which isn't provided
    foreach (var song in album.Songs)
        song.AlbumId = album.Id;

    // see if album exists already
    var matchedAlbum = AlbumData.Current
        .SingleOrDefault(alb => alb.Id == album.Id ||
                                alb.AlbumName == album.AlbumName);
    if (matchedAlbum == null)
        AlbumData.Current.Add(album);
    else
        matchedAlbum = album;

    // return a string to show that the value got here
    var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
    resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                     Encoding.UTF8, "text/plain");
    return resp;
}

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods.

As in MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the resulting ApiMessageError object. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP status code response.

If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added.
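Listing 7 below shows the jQuery client. For completeness, if you were calling the same endpoint from .NET code instead, a hedged sketch using HttpClient and the PostAsJsonAsync() extension from the Web API client libraries might look like this (the base URL is assumed from the earlier browser examples, and the call has to run inside an async method):

// Hedged sketch of a .NET client for PostAlbum()
var client = new HttpClient
{
    BaseAddress = new Uri("http://localhost/aspnetWebApi/") // assumed base URL
};

var album = new Album { AlbumName = "Power Age", Artist = "AC/DC", YearReleased = 1977 };

// PostAsJsonAsync serializes the album to JSON and sets
// Content-Type: application/json on the request
var response = await client.PostAsJsonAsync("albums/", album);
response.EnsureSuccessStatusCode();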
The client jQuery code to call the POST operation is shown in Listing 7.

var id = new Date().getTime().toString();
var album = {
    "Id": id,
    "AlbumName": "Power Age",
    "Artist": "AC/DC",
    "YearReleased": 1977,
    "Entered": "2002-03-11T18:24:43.5580794-10:00",
    "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
    "AmazonUrl": "http://www.amazon.com/…",
    "Songs": [
        { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
        { "SongName": "Downpayment Blues", "SongLength": 4.22 },
        { "SongName": "Riff Raff", "SongLength": 2.42 }
    ]
}

$.ajax({
    url: "albums/",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(album),
    processData: false,
    beforeSend: function (xhr) {
        // not required since JSON is default output
        xhr.setRequestHeader("Accept", "application/json");
    },
    success: function (result) {
        // reload list of albums
        page.loadAlbums();
    },
    error: function (xhr, status, p3, p4) {
        var err = "Error";
        if (xhr.responseText && xhr.responseText[0] == "{")
            err = JSON.parse(xhr.responseText).message;
        alert(err);
    }
});

The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server via POST. The data is turned into JSON and the content type is set to application/json so that the server knows what to convert when deserializing into the Album instance.

The jQuery code hooks up success and failure events. Success returns the result data, which is a string that's echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property.

REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}

To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

Model Binding with UrlEncoded POST Variables

In the example in Listing 7, I used JSON to post a serialized object to a server method that accepts a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods.

For example, another common way is to send plain UrlEncoded POST values to the server. Web API supports model binding that works similarly (but not identically) to MVC's model binding, where POST variables are mapped to the properties of object parameters on the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

$.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

Although the code looks very similar to the client code we used before when passing JSON, here the data passed is URL-encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the same server method can handle either the JSON object or the UrlEncoded POST values.

Dynamic Access to POST Data

There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have UrlEncoded POST values, you can access them dynamically using a FormDataCollection:

[HttpPost]
public string PostAlbum(FormDataCollection form)
{
    return string.Format("{0} - released {1}",
        form.Get("AlbumName"), form.Get("YearReleased"));
}

The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when Web API is hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET.

If your client is sending JSON to your server, and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

There are quite a few options available to you to receive data with Web API, which gives you more choices for the right tool for the job.

Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content – the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:

Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API

Handling Delete Operations

Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title:

public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = AlbumData.Current
        .Where(alb => alb.AlbumName == title)
        .SingleOrDefault();
    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    AlbumData.Current.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

To call this action method using jQuery, you can use:

$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "Delete",
        success: function (result) {
            $el.fadeOut().remove();
        },
        error: jqError
    });
});

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via a route value or the query string.

Routing Conflicts

In all requests shown so far, with the exception of the AlbumArt image example, I used the HTTP Verb routing that I set up in Listing 1.
HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing only. You saw one example that didn't really fit – the return of an image, where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller, because of the limited mapping of HTTP verbs imposed by HTTP Verb routing.

To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:

[HttpGet]
public IQueryable<Album> GetSortableAlbums()
{
    var albums = AlbumData.Current;

    // generally should be done only on actual queryable results (EF etc.)
    // Done here because we're running with a static list but otherwise might be slow
    return albums.AsQueryable();
}

If you compile this code and try to now access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn't work.

As I mentioned earlier for the image display, the only solution to get this method to work is to move it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL – it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

Error Handling

I've already done some minimal error handling in the examples. For example, in Listing 6 I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? Say you have a controller method like this:

[HttpGet]
public void ThrowException()
{
    throw new UnauthorizedAccessException("Unauthorized Access Sucka");
}

You can call it with this:

http://localhost/AspNetWebApi/albums/rpc/ThrowException

The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;

If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action.
[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.BadRequest,
        new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the installed exception filters and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format – XML or JSON. You can pass any content to the HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client.

Here's a small helper method on the controller that you might use to send exception info back to the client consistently:

private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(statusCode,
        new ApiMessageError() { message = message });

    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

[HttpGet]
public void ThrowErrorSafe()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Exception Filters

Another, more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation.

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.InternalServerError;

        var exType = context.Exception.GetType();

        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError() { message = context.Exception.Message };

        // create a new response and attach our ApiError object
        // which now gets returned on ANY exception result
        var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
        context.Response = errorResponse;

        base.OnException(context);
    }
}

Exception filter attributes can be assigned to an ApiController class like this:

[UnhandledExceptionFilter]
public class AlbumRpcApiController : ApiController

or you can globally assign them to all controllers by adding them to the HTTP configuration's Filters collection:

GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the output format that the client expects. For example, on failure an AJAX application can expect to see a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up.
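One more pattern worth sketching in this context – my own illustration, not code from the sample project: a small custom exception type that a filter like the one in Listing 8 could treat specially, as discussed next.

// Hypothetical marker exception; the name ApiException is my own
public class ApiException : Exception
{
    public HttpStatusCode StatusCode { get; private set; }

    public ApiException(string message,
                        HttpStatusCode statusCode = HttpStatusCode.BadRequest)
        : base(message)
    {
        StatusCode = statusCode;
    }
}

Inside OnException(), the filter could then check context.Exception is ApiException and expose that exception's message and status code to the client, while mapping everything else to a generic 500 response.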
You can even create some custom exceptions (as sketched above) so you can differentiate your own exceptions from unhandled system exceptions – you often don't want to display error information from 'unknown' exceptions, as they may contain sensitive system information or info that's not generally useful to users of your application/site.

This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one example of how you can plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future.

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing and making HTTP semantics easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API.

Related Resources

Where does ASP.NET Web API fit?
Sample Source Code on GitHub
Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
Creating a JSONP Formatter for ASP.NET Web API
Removing the XML Formatter from ASP.NET Web API Applications

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.


  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
Abstract

Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties.

This paper attempts to describe the Reference Architecture for Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes a generic reference architecture for telecom enterprises and moves on to explain how to achieve an Enterprise Reference Architecture by using SOA.

Introduction

A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It serves as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparison for the various departments/business entities of a company, or for the various companies within an enterprise. It provides multiple views for multiple stakeholders.

Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.

Purpose of Reference Architecture

In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel, as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.
Reference architecture provides missing architectural information that can be provided in advance to project team members to enable consistent architectural best practices.

Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:

·       Reference architecture is more of a communication channel to an enterprise
·       Helps the business owners to actualize their strategies, vision, objectives, and principles
·       Evaluates the IT systems based on Reference Architecture principles
·       Reduces IT spending through increased functionality, availability, scalability, etc.
·       A real-time integration model helps to reduce the latency of data updates
·       Is used to define a single source of information
·       Provides a clear view on how to manage information and security
·       Defines the policy around data ownership, product boundaries, etc.
·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets
·       Has a shorter implementation time and cost

Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).

Common Pitfalls for Telecom Service Providers

Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations – some of these indicate a lack of maturity of the telecom service provider:

·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs.
·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications.
·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality.
·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications.
·       Application boundaries are not clear, and functionality that is not in the initial scope of an application gets pushed onto it. This results in the development of multiple, small applications without proper boundaries.
·       Usage of legacy OSS/BSS systems and poor integration across multiple COTS products and internal systems. Most of the integrations are developed on an ad-hoc, point-to-point basis.
·       Redundancy of business functions in different applications
·       Fragmented data across the different applications and no integrated view of the strategic data
·       Many performance issues due to the usage of complex integration across OSS and BSS systems

However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum.
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.

"The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail." – Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

·       Transformation of business structures to align with customer requirements
·       Adoption of more Internet-like technical architectures. The Web 2.0 concept is increasingly being used.
·       Virtualization of the traditional operations support system (OSS)
·       Adoption of SOA to support development of IP-based services
·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are Reference Architecture Goals, Principles, and Enterprise Vision and Telecom Transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today's telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems, with implications for the service provider's organizational requirements and structure.

Telecom reference architectures are today expected to:

·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, and back-end systems
·       Be able to provision multiple services and service bundles
·       Deliver converged voice, video and data services
·       Leverage the existing network infrastructure
·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties
·       Support charging of advanced data services such as VoIP, on-demand services (e.g., video), IMS/SIP services, mobile money, content services and IPTV
·       Help in faster deployment of new services
·       Serve as an effective platform for collaboration between network, IT and business organizations
·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP)
·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management
·       Lower operating costs to drive profitability

Enterprise Reference Architecture

The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for Telecom Application Usage and how to achieve an Enterprise-Level Reference Architecture using SOA:

·       Telecom Reference Architecture
·       Enterprise SOA-based Reference Architecture

Telecom Reference Architecture

Tele Management Forum's New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:

·       The enhanced Telecom Operations Map (eTOM) is a business process framework.
·       The Shared Information Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization.
·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM.
·       The Technology Neutral Architecture (TNA) is an integrated framework. TNA is an architecture that is sustainable through technology changes.

NGOSS Architecture Standards are:

·       Centralized data
·       Loosely coupled distributed systems
·       Application components/re-use
·       A technology-neutral system framework with technology-specific implementations
·       Interoperability to service provider data/processes
·       Allows more re-use of business components across multiple business scenarios
·       Workflow automation

The traditional operator systems architecture consists of four layers:

·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information.
·       Operations Support System (OSS) layer, built around product, service, and resource inventories.
·       Networks layer – consists of network elements and 3rd-party systems.
·       Integration layer – to maximize application communication and overall solution flexibility.

Reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any Telecom Service Provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross-sell/up-sell, and retention/loyalty.

CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

·       Manage the end-to-end lifecycle of a customer request for products.
·       Create and manage customer profiles.
·       Manage all interactions with customers – inquiries, requests, and responses.
·       Provide updates to Billing and other southbound systems on customer/account-related updates such as customer/account creation, deletion, modification, request bills, final bill, duplicate bills, and credit limits, through Middleware.
·       Work with the Order Management System, Product, and Service Management components within CRM.
·       Manage customer preferences – involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. The requests by customers are sent via fulfillment/provisioning to the billing system for order processing.
2. Billing and Revenue Management

Billing and Revenue Management handles the collection of appropriate usage records and the production of timely and accurate bills – for providing pre-bill usage information and billing to customers, for processing their payments, and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.

The key functionalities provided by these applications are:

·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers.
·       To manage customers' billing accounts, process their payments, perform payment collections, and monitor the status of the account balance.
·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints.
·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing.
·       Support revenue sharing; split charging where usage is guided to an account different from that of the service consumer.
·       Support prepaid and post-paid rating.
·       Send notifications on approaching or exceeding the usage thresholds as enforced by the subscribed offer, and/or as set up by the customer.
·       Support prepaid, post-paid, and hybrid (where some services are prepaid and the rest of the services post-paid) customers and conversion from post-paid to prepaid, and vice versa.
·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, accounts receivable, GL interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc.
·       Initiate direct debit to collect payment against an outstanding invoice.
·       Send notifications to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceeded, etc.

Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.

3. Mediation

Mediation systems transform/translate the raw or native Usage Data Records into a general format that is acceptable to billing for rating purposes.

The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution:

·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocols and interfaces.
·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format.
·       Validate Usage Data Records from each source.
·       Segregate Usage Data Records coming from each source into multiple streams, based on the segregation requirements of the end applications.
·       Aggregate Usage Data Records from different sources based on aggregation rules, if any.
·       Consolidate multiple Usage Data Records from each source.
·       Deliver formatted Usage Data Records to different end applications like Billing, Interconnect, Fraud Management, etc.
·       Generate an audit trail for incoming Usage Data Records and keep track of all the Usage Data Records at various stages of the mediation process.
·       Check for duplicate Usage Data Records across files for a given time window.

4. Fulfillment

This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order and ensures completion on time, as well as a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer updates on order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.

The key functionalities provided by these applications are:

·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders;
·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable;
·       Checking the creditworthiness of customers as part of the customer order process;
·       Testing the completed offering to ensure it is working correctly;
·       Updating the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled;
·       Assigning and tracking customer provisioning activities;
·       Managing customer provisioning jeopardy conditions; and
·       Reporting progress on customer orders and other processes to the customer.

These applications typically get orders from CRM systems. They interact with network elements and billing systems for the fulfillment of orders.

5. Enterprise Management

This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that:

·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;
·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;
·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.

(i) Enterprise Risk Management

Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission-critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts.
Two key areas covered in Risk Management by telecom operators are:

·       Revenue Assurance: The revenue assurance system is responsible for identifying revenue loss scenarios across components/systems, and helps in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution.
o   Identify all usage information dropped when networks are being upgraded.
o   Interconnect bill verification.
o   Identify where services are routinely provisioned but never billed.
o   Identify poor sales policies that are intensifying collections problems.
o   Find leakage where usage is sent to the error bucket and never billed for.
o   Find leakage where field service, CRM, and network build-out are not optimized.

·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns, in order to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.

The key roles and responsibilities of the system component are as follows:

o   The fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically.
o   Fraud management will be able to detect unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will raise an alert to the operator the very first time such illegal calls, or calls which are not billed, occur.
o   The solution will be to have an alarm management system that will deliver alarms to the operator/provider whenever it detects fraud, thus minimizing fraud by catching it the first time it occurs.
o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems.

(ii) Knowledge Management

This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.

Key responsibilities of knowledge base management are to:

·       Maintain knowledge base – creation and updating of the knowledge base on an ongoing basis.
·       Search knowledge base – search of the knowledge base by keywords or category browsing.
·       Maintain metadata – management of metadata on the knowledge base to ensure effective management and search.
·       Run the report generator.
·       Provide content – add content to the knowledge base, e.g., user guides, operational manuals, etc.

(iii) Document Management

It focuses on maintaining a repository of all electronic documents or images of paper documents relevant to the enterprise, using a system.

(iv) Data Management

It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.

Key responsibilities of Data Management are:

·       Using ETL, extract the data from CRM, Billing, web content, ERP, campaign management, financial, network operations, asset management info, customer contact data, customer measures, benchmarks, and process data (e.g., process inputs, outputs, and measures) into the Enterprise Data Warehouse.
· Management of data traceability to source, data-related business rules/decisions, data quality, data cleansing, data reconciliation, and competitors’ data – storage for all the enterprise data (customer profiles, products, offers, revenues, etc.).
· Get online updates through night-time replication or a physical backup process at regular frequency.
· Provide data access to business intelligence and other systems for their analysis, report generation, and use.

(v) Business Intelligence

It uses the enterprise data to provide various analyses and reports, including prospects and analytics for customer retention, acquisition of new customers through offers, and SLAs. It generates the right, optimized plans and bolt-ons for customers.

The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the enterprise level:

· It will do pattern analysis and report problems.
· It will do data analysis – statistical analysis, data profiling, affinity analysis of data, and customer-segment-wise usage patterns on offers, products, and services, with revenue generation mapped against services and customer segments.
· It will do performance (business, system, and forecast) analysis, churn propensity, response time, and SLA analysis.
· It will support online and offline analysis, with report drill-down capability.
· It will collect, store, and report various SLA data.
· It will provide the necessary intelligence for marketing and for working on campaigns, etc., with cost-benefit analysis and predictions.
· It will advise on customer promotions with additional services, based on the loyalty and credit history of the customer.
· It will interface with the Enterprise Data Management system for the data needed to run reports and analysis tasks. It will interface with campaign schedules, based on historical success evidence.

(vi) Stakeholder and External Relations Management

It manages the enterprise’s relationships with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, the local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.

(vii) Enterprise Resource Planning

It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and to manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform, enterprise-wide system environment.

The key roles and responsibilities of the enterprise system are given below:

· It will handle responsibilities such as core accounting, financial, and management reporting.
· It will interface with CRM for capturing customer accounts and details.
· It will interface with billing to capture billing revenue and other financial data.
· It will be responsible for executing the dunning process. Billing will send the required feed to ERP for execution of dunning.
· It will interface with CRM and Billing through batch interfaces.

Enterprise management systems act as horizontals in the enterprise and typically interact with all major telecom systems.
For example, an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchange.

6. External Interfaces/Touch Points

The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a service provider to other parties can be achieved by a variety of mechanisms, including:

· Exchange of emails or faxes
· Call centers
· Web portals
· Business-to-Business (B2B) automated transactions

These applications provide an Internet-technology-driven interface to external parties to undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points.

Typical characteristics of these touch points are:

· A pre-integrated self-service system, either a stand-alone web framework or an integration front end with a portal engine
· A self-service layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment
· Portlet-driven connectivity exposing data and services interoperability through a portal engine or web application

These touch points mostly interact with the CRM systems for requests, inquiries, and responses.

7. Middleware

This component is primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution:

· Act as an integration framework, covering inbound and outbound interfaces.
· Provide a web service framework with a service registry.
· Support an SOA framework with an SOA service registry.
· Each of the interfaces from/to Middleware to other components handles data transformation, translation, and mapping of data points.
· Receive data from the caller and activate and/or forward the data to the recipient system in XML format.
· Use standard XML for data exchange.
· Provide the response back to the service/call initiator.
· Provide tracking until the response completes.
· Keep a store of transitional data against each call/transaction.
· Interface through Middleware to get any information that is possible and allowed from the existing systems to enterprise systems, e.g., customer profile and customer history.
· Provide data in a common, unified format to the SOA calls across systems, and follow the Enterprise Architecture directive.
· Provide an audit trail for all transactions being handled by the component.

8. Network Elements

The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.

Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value-added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing for rating and billing. They also integrate with provisioning systems for order/service fulfillment.

9. 3rd Party Applications

3rd party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the government.

Depending on applicability and the type of functionality provided by 3rd party applications, integration with the different telecom systems like CRM, provisioning, and billing will be done.

10. Service Delivery Platform

A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value-added telecom services. SDPs are based on the concepts of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.

SOA Reference Architecture

The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It is about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.

In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization’s business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:

· The business to specify processes as orchestrations of reusable services
· Technology-agnostic business design, with technology hidden behind the service interface
· A contract-like interaction between business and IT, based on service SLAs
· Accountability and governance, better aligned to business services
· Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change
· Reduced pressure to replace legacy applications, and extended lifetime for them, through encapsulation in services
· A cloud computing paradigm, using web services technologies, that makes service outsourcing possible on an on-demand, utility-like, pay-per-usage basis

The following section presents the Reference Architecture as a logical view of the telecom solution. New custom-built applications need to align with this logical architecture in the long run to achieve EA benefits.

Packaged implementation applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and interact with other applications as service consumers.

COTS applications need to expose services through wrappers, such as adapters, to utilize existing resources and at the same time achieve Enterprise Architecture goals and objectives, as sketched below.
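As a rough illustration of such a wrapper – a minimal sketch only, with every name in it (IBillingService, LegacyBillingApi, and so on) invented for the example rather than taken from any real product – a C# adapter over a legacy billing API might look like this:

// Hypothetical legacy COTS billing API - all names here are invented
// for illustration and do not refer to any real product.
internal class LegacyBillingApi
{
    public decimal FetchOutstanding(string accountNo)
    {
        return 42.50m; // stub value for the sketch
    }
}

// The service contract that the rest of the enterprise programs against.
public interface IBillingService
{
    decimal GetOutstandingBalance(string accountNumber);
}

// The wrapper/adapter: it encapsulates the COTS specifics behind the
// contract, so replacing the billing product only affects this class.
public class BillingServiceAdapter : IBillingService
{
    private readonly LegacyBillingApi legacyApi = new LegacyBillingApi();

    public decimal GetOutstandingBalance(string accountNumber)
    {
        // Data transformation/translation between the service contract
        // and the COTS API would live here.
        return legacyApi.FetchOutstanding(accountNumber);
    }
}

The point is that consumers depend only on the service contract; the COTS specifics stay inside the adapter, which is what allows changes to external systems to be absorbed without rippling through the rest of the ecosystem.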
The following are the various layers for enterprise-level deployment of SOA. This diagram captures the abstract view of the enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers. However, there is no strict rule that top layers should not directly communicate with bottom layers.

The diagram below represents the important logical pieces that would result from the overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing, Revenue Management, and Fulfilment, and the enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS transformation.

ERP holds the data for Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc.

CRM holds the data related to Orders, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.

Content Management handles enterprise search and query. The billing application consists of the following components:

· Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges
· Enterprise databases will hold both application and service data, whether structured or unstructured.

MDM – master data mainly consists of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction, so that the complex accessing logic is hidden from the other service layers. This is a basic service layer, which exposes application functionalities and data as reusable services.
The three types of application access services are:

· Application Access Service: This service layer exposes application-level functionalities as reusable services for BSS-to-BSS and BSS-to-OSS integration. This layer is enabled using disparate technologies such as web services, integration servers, and adapters.

· Data Access Service: This service layer exposes application data services as reusable reference data services. This is done via direct interaction with application data, and it provides federated query.

· Network Access Service: This service layer exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.

Common Services encompass the management of structured, semi-structured, and unstructured data, as well as information services, portal services, interaction services, infrastructure services, security services, etc.

3. Integration Layer: This consists of service infrastructure components like the service bus, a service gateway for partner integration, the service registry, the service repository, and a BPEL processor. The service bus carries the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all qualities of service for messaging, like reliability, scalability, and availability. The service registry holds all contracts (WSDL) of services, and it helps developers locate or discover services during design time or runtime.

· The BPEL processor is useful for orchestrating services to compose a complex business scenario or process.
· Workflow and business rules management are also required, to support manual triggering of certain activities within a business process based on the rules set up and also on the state machine information.

The application, data, and service mediation layers typically form the overall composite application development framework, or SOA framework.

4. Business Process Layer: These are typically the intermediate services layer and represent shared business process services. At the enterprise level, these services come from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.

5. Access Layer: This layer consists of portals for the enterprise and provides a single view of enterprise information management and dashboard services.

6. Channel Layer: This consists of various devices, applications that form part of the extended enterprise, and browsers through which users access the applications.

7. Client Layer: This designates the different types of users accessing the enterprise applications. The type of user would typically be an important factor in determining the level of access to applications.

8. Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring involve all SOA aspects, like services, SLAs, and other QoS lifecycle processes, for both applications and services, surrounding SOA governance.

9. EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices: EA governance is important in terms of providing the overall direction to the SOA implementation within the enterprise.
This involves board-level involvement, in addition to business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information and Related Technology).

Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up new roles suited to the SOA journey.

Conclusions

Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference Architectures provide best practices and approaches in a vendor-independent way of dealing with technology and standards. They model the abstract architectural elements for an enterprise independently of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way toward meeting these challenges. The use of SOA-based architecture for communication with each of the external systems, like Billing, CRM, etc., in an OSS/BSS system makes the architecture very loosely coupled, with greater flexibility. Any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. Since the architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

So, if we start at the start, how did you get started in programming?

When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

How did you learn to program? Was there someone helping you out?

Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!”

When did you think this might be something that you actually wanted to do as a career?

For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

Do you think you learnt much about programming at university?

Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs, which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed, like TCP. Those are really important for examples of how to approach things.

Did you do any internships while you were at university?

Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then.

Was there just not a culture of looking after your code?
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that, they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn.

I was going to ask how you found working with people who were more experienced than you…

When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product, which I’d written, and said what they didn’t like. By that point, it was way too late and I’d disagree with them.

Do you think the lack of code reviews was a bad thing?

I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight, and then the right way to do it is continuously and in very, very small chunks.

Have you had any significant projects you’ve worked on outside of a job?

When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isometric projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

Could you explain a little more about NAct?

It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.
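(To make the mailbox idea concrete, here is a minimal, hedged sketch in plain C#. This is not NAct’s actual API – NAct hides this plumbing behind ordinary method calls on strongly-typed interfaces – it only spells out what one actor owning one piece of data looks like; the CounterActor name is invented for the example.)

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A counter whose state belongs to exactly one thread of execution.
// Other threads interact with it only by posting messages to its mailbox.
public class CounterActor
{
    private readonly BlockingCollection<Action> mailbox = new BlockingCollection<Action>();
    private int count; // only ever touched by the actor's own thread

    public CounterActor()
    {
        // A dedicated thread drains the mailbox; no locks are needed,
        // because all access to 'count' is serialized through it.
        Task.Factory.StartNew(() =>
        {
            foreach (Action message in mailbox.GetConsumingEnumerable())
            {
                message();
            }
        }, TaskCreationOptions.LongRunning);
    }

    public void Increment()
    {
        mailbox.Add(() => count++);
    }

    // Reads are also messages; the reply comes back asynchronously,
    // so callers never block waiting inside another actor.
    public Task<int> GetCountAsync()
    {
        TaskCompletionSource<int> reply = new TaskCompletionSource<int>();
        mailbox.Add(() => reply.SetResult(count));
        return reply.Task;
    }
}

(Every access to the counter is funnelled through a single thread, which is exactly the “this data belongs to this thread of execution” rule; frameworks like NAct generate this kind of plumbing automatically.)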
Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases?

It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother, because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

When you’re using actors, you presumably still have to write your code in a different way from how you would otherwise write single-threaded code?

You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes, so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump.

If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

Usually pretty easy. If you can identify even one boundary where things look like messages, and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult.

If we do get 1000-core desktop PCs, do you think actors will scale up?

It’s hard. There are always in the order of twenty to fifty actors in my whole program, because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table, but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism.

Do you think it helps if actors get baked into the language, similarly to Erlang?

Erlang is excellent in that it has thread-local garbage collection.
C# doesn’t, so there’s a limit to how well C# actors can possibly scale, because there’s a single garbage-collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language.

Speaking of language design, do you have a favourite programming language?

I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much.

When you say it doesn’t have some of the niggles?

C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#.

And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it.

What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile?

I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong.

On a slightly different note, how do you like to debug code?

I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat, rather than trying to guess anything. I suppose it’s quite top-down.

Is most of the time then spent in the debugger?
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than by outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see whether it looks right or wrong.

We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase?

I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works, using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase.

If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from very much a SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is.

And what about tests? Do you think they help at all?

I’ve never had the opportunity to learn a codebase which has had tests. I don’t know what it’s like!

What about when you’re actually developing? How useful do you find tests in finding bugs or regressions?

Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work.
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are.

Does that mean that when you write your code the first time, there are no tests?

Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature, because I’m not good enough at writing tests to think of bugs that I would have written into the code.

So not writing regression tests for all of your code hasn’t affected you too badly?

There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time, I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time.

How do you think your programming style has changed over time?

I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things, like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible.

If you had to summarise the essence of programming in a pithy sentence, how would you do it?

Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

So you think it’s an art rather than a science?

It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art.
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job.

I suppose engineering is the foundation on which you build the art.

Exactly.

How do you think programming is going to change over the next ten years?

There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary, and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript.

If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

The value of having a statically-typed program is in the tools that rely on the static types to produce a smooth IDE experience, rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write a tiny program, I’m still going to get value out of writing it in C# using ReSharper, because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help.

Any other visions of the future?

Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors.

More Red Gate Coder interviews

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days.

In the end I found out that the ‘right reason’ changes, in my understanding, depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…), I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as ‘Doing real-world TDD in .NET, with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make: it’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and I myself have also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spend my time exclusively on stating the obvious…

So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below.

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments.

The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. – For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.

In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.

I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The ‘Red’ (pt. 1)

First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more. Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: there’s no implementation that the IoC container could load, of course. So let’s fix that. With R#, this is very easy: first, create an ICalculator-derived type. Next, implement the interface members. And finally, move the new class to its own file.

So far my ‘work’ was six mouse clicks long; the only thing that’s left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back to the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The ‘Red’ (pt. 2)

Now, the execution of the above test gives the following result: this time, the test outcome tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part…

The ‘Green’

Making the test green is quite trivial. Just make LastResult an automatic property:

[Implements(typeof(ICalculator))]
internal class Calculator : ICalculator
{
    public double? LastResult { get; private set; }
}

One more round…

Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:

Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose - rather, I will only have to add another Row attribute to the existing one. And, from the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested – the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

Back to our Calculator development again, this is what the test result tells us at the moment: we’re red again, because there is no implementation yet. Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }
    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }
    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is discussed here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I’m regularly using row-based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely that additional test cases will come up. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test. And remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like this in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce the following HTML report (with Gallio):

Not bad for the amount of work we invested in it, huh? – There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here; it’s trivial enough and brings nothing new…

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring.
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn’t state it more precisely. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to achieve a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code.
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

Test-driven development is the normal, natural way of writing software; code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…)

It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go in the right direction…) The test code serves ‘only’ to make the production code work. But it’s the number of delivered features that alone counts at the end of the day – no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity – without sacrificing the principles of TDD (more than I’d do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. The entire software development profession is, historically seen, very young and only at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that – for ‘saving’ a few hundred bucks – is just not acceptable and a very bad decision in business terms (though I have seen and heard of that quite a few times…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not – e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have…

Some last words

First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion – Fail By Assertion, Not By Anything Else!
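To illustrate that last rule, here is a minimal sketch of the LastResult test for Add() that was skipped above (it reuses the fixture and container from the earlier listings; the row values are just sample data). Once it runs as part of CI, the only thing that can turn it red is its assertion:

[Test]
[Row(1, 2, 3)]
public void AddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Add(operand1, operand2);

    // in the CI phase, red can mean only one thing: this assertion failed
    Assert.AreEqual(expectedResult, calculator.LastResult);
}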

    Read the article

  • Know more about Cache Buffer Handle

    - by Liu Maclean
In an earlier article, «latch free: cache buffer handles … SQL …», I described the cache buffer handle latch in some detail. In short: in order to pin a buffer header, a process first needs a buffer handle. Buffer handles are allocated from a reserved set protected by the cache buffer handles latch. To avoid acquiring this latch on every pin, each process keeps up to _db_handles_cached (default 5) handles cached. A simple SQL statement normally pins no more buffers than that, so it can work entirely from its cached handles; only when a statement needs to pin more buffers at once does the process have to get the cache buffer handles latch to allocate additional handles. The total number of handles is limited by db_block_buffers/processes, and _cursor_db_buffers_pinned limits how many buffers a single cursor can pin at once. SQL statements that pin many buffers concurrently therefore cause contention on the cache buffer handles latch.

From T.ASKMACLEAN.COM, the structure of a cache buffer handle looks like this:

------------------------------
| Buffer state object        |
------------------------------
| Place to hang the buffer   |
------------------------------
| Consistent Get?            |
------------------------------
| Proc Owning SO             |
------------------------------
| Flags(RIR)                 |
------------------------------

An actual cache buffer handle state object:

SO: 70000046fdfe530, type: 24, owner: 70000041b018630, flag: INIT/-/-/0x00
(buffer) (CR) PR: 70000048e92d148 FLG: 0x500000
lock rls: 0, class bit: 0
kcbbfbp: [BH: 7000001c7f069b0, LINK: 70000046fdfe570]
where: kdswh02: kdsgrp, why: 0
BH (7000001c7f069b0) file#: 12 rdba: 0x03061612 (12/398866) class: 1 ba: 7000001c70ee000
set: 75 blksize: 8192 bsi: 0 set-flg: 0 pwbcnt: 0
dbwrid: 2 obj: 66209 objn: 48710 tsn: 6 afn: 12
hash: [700000485f12138,700000485f12138] lru: [70000025af67790,700000132f69ee0]
lru-flags: hot_buffer
ckptq: [NULL] fileq: [NULL] objq: [700000114f5dd10,70000028bf5d620]
use: [70000046fdfe570,70000046fdfe570] wait: [NULL]
st: SCURRENT md: SHR tch: 0
flags: affinity_lock
LRBA: [0x0.0.0] HSCN: [0xffff.ffffffff] HSUB: [65535]
where: kdswh02: kdsgrp, why: 0

# Example:
#   (buffer) (CR) PR: 37290 FLG:    0
#   kcbbfbp    : [BH: befd8, LINK: 7836c] (WAITING)

Buffer handle (X$KCBBF) kernel cache, buffer buffer_handles
Query x$kcbbf – lists all the buffer handles

Related hidden parameters:

_db_handles                System-wide simultaneous buffer operations, number of buffer handles
_db_handles_cached         Buffer handles cached for each process, default 5
_cursor_db_buffers_pinned  Additional number of buffers a cursor can pin at once
_session_kept_cursor_pins  Number of cursor pins to keep in a session

When a buffer is pinned it is attached to a buffer state object.

The following demo shows the interplay between the cache buffer handles latch and buffer pins.

SESSION A:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

SQL> create table test_cbc_handle(t1 int);

Table created.

SQL> insert into test_cbc_handle values(1);

1 row created.

SQL> commit;

Commit complete.
SQL> select rowid from test_cbc_handle;

ROWID
------------------
AAANO6AABAAAQZSAAA

SQL> select * from test_cbc_handle where rowid='AAANO6AABAAAQZSAAA';

        T1
----------
         1

SQL> select addr,name from v$latch_parent where name='cache buffer handles';

ADDR             NAME
---------------- --------------------------------------------------
00000000600140A8 cache buffer handles

SQL> select to_number('00000000600140A8','xxxxxxxxxxxxxxxxxxxx') from dual;

TO_NUMBER('00000000600140A8','XXXXXXXXXXXXXXXXXXXX')
----------------------------------------------------
                                          1610694824

The cache buffer handles latch exists only as a parent latch; there are no child latches. Now let SESSION A hold this parent latch by calling kslgetl via oradebug (kslgetl is Oracle's internal function for getting a latch):

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug call kslgetl 1610694824 1;
Function returned 1

Then, in SESSION B:

SQL> select * from v$latchholder;

       PID        SID LADDR            NAME                                                                   GETS
---------- ---------- ---------------- ---------------------------------------------------------------- ----------
        15        141 00000000600140A8 cache buffer handles                                                    119

Although the cache buffer handles latch is held by session A, the following query still runs without having to acquire it:

SQL> select * from test_cbc_handle where rowid='AAANO6AABAAAQZSAAA';

        T1
----------
         1

The server process can still read the buffer here because, thanks to "_db_handles_cached", each process caches 5 cache buffer handles. With "_db_handles_cached"=0, a process no longer caches these 5 handles, so in order to pin a buffer it has to hold the cache buffer handles latch to allocate a cache buffer handle:

SQL> alter system set "_db_handles_cached"=0 scope=spfile;

System altered.

Restart the instance (shutdown immediate; startup;), then in session A:

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug call kslgetl 1610694824 1;
Function returned 1

Session B:

select * from test_cbc_handle where rowid='AAANO6AABAAAQZSAAA';

Session B hangs!! WHY?

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump systemstate 266;
Statement processed.
SO: 0x11b30b7b0, type: 2, owner: (nil), flag: INIT/-/-/0x00
(process) Oracle pid=22, calls cur/top: (nil)/0x11b453c38, flag: (0) -
          int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 0 0 0
            last post received-location: No post
            last process to post me: none
            last post sent: 0 0 0
            last post sent-location: No post
            last process posted by me: none
  (latch info) wait_event=0 bits=8
    holding    (efd=4) 600140a8 cache buffer handles level=3

SO: 0x11b305810, type: 2, owner: (nil), flag: INIT/-/-/0x00
(process) Oracle pid=10, calls cur/top: 0x11b455ac0/0x11b450a58, flag: (0) -
          int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 0 0 0
            last post received-location: No post
            last process to post me: none
            last post sent: 0 0 0
            last post sent-location: No post
            last process posted by me: none
  (latch info) wait_event=0 bits=2
      Location from where call was made: kcbzgs:
      waiting for 600140a8 cache buffer handles level=3

FBD93353:000019F0    10   162 10005   1 KSL WAIT BEG [latch: cache buffer handles] 1610694824/0x600140a8 125/0x7d 0/0x0
FF936584:00002761    10   144 10005   1 KSL WAIT BEG [latch: cache buffer handles] 1610694824/0x600140a8 125/0x7d 0/0x0

PID=22 is holding the cache buffer handles latch, and PID=10 is waiting for it: with "_db_handles_cached"=0 the process has no cached cache buffer handles. The systemstate dump shows the kcbbfbp cache buffer handle information: with "_db_handles_cached"=0 and the cache buffer handles latch held elsewhere, the process has to wait for the latch before the buffer can be pinned.

After session A exits, session B resumes:

SQL> select * from v$latchholder;

no rows selected

SQL> insert into test_cbc_handle values(2);

1 row created.

SQL> commit;

Commit complete.

SQL> select t1,rowid from test_cbc_handle;

        T1 ROWID
---------- ------------------
         1 AAANPAAABAAAQZSAAA
         2 AAANPAAABAAAQZSAAB

SQL> select spid,pid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));

SPID                PID
------------ ----------
19251                10

Now attach GDB to SPID=19251 and set a breakpoint on kcbrls, the function that releases a buffer:

[oracle@vrh8 ~]$ gdb $ORACLE_HOME/bin/oracle 19251
GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-37.el5)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/bin/oracle...(no debugging symbols found)...done.
Attaching to program: /s01/oracle/product/10.2.0.5/db_1/bin/oracle, process 19251
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libskgxp10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libskgxp10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libhasgen10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libhasgen10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libskgxn2.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libskgxn2.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libocr10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libocr10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libocrb10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libocrb10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libocrutl10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libocrutl10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libjox10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libjox10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libclsra10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libclsra10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libdbcfg10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libdbcfg10.so
Reading symbols from /s01/oracle/product/10.2.0.5/db_1/lib/libnnz10.so...(no debugging symbols found)...done.
Loaded symbols for /s01/oracle/product/10.2.0.5/db_1/lib/libnnz10.so
Reading symbols from /usr/lib64/libaio.so.1...(no debugging symbols found)...done.
Loaded symbols for /usr/lib64/libaio.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libnsl.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libnsl.so.1
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libnss_files.so.2
0x00000035c000d940 in __read_nocancel () from /lib64/libpthread.so.0
(gdb) break kcbrls
Breakpoint 1 at 0x10e5d24

Session B:

select * from test_cbc_handle where rowid='AAANPAAABAAAQZSAAA';

The select hangs!!

GDB:

(gdb) c
Continuing.

Breakpoint 1, 0x00000000010e5d24 in kcbrls ()
(gdb) bt
#0  0x00000000010e5d24 in kcbrls ()
#1  0x0000000002e87d25 in qertbFetchByUserRowID ()
#2  0x00000000030c62b8 in opifch2 ()
#3  0x00000000032327f0 in kpoal8 ()
#4  0x00000000013b7c10 in opiodr ()
#5  0x0000000003c3c9da in ttcpip ()
#6  0x00000000013b3144 in opitsk ()
#7  0x00000000013b60ec in opiino ()
#8  0x00000000013b7c10 in opiodr ()
#9  0x00000000013a92f8 in opidrv ()
#10 0x0000000001fa3936 in sou2o ()
#11 0x000000000072d40b in opimai_real ()
#12 0x000000000072d35c in main ()

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump systemstate 266;
Statement processed.

In the dump we can see how the kcbbfbp cache buffer handle links the SO (state object) and the BH (buffer header):
----------------------------------------
SO: 0x11b452348, type: 3, owner: 0x11b305810, flag: INIT/-/-/0x00
(call) sess: cur 11b41bd18, rec 0, usr 11b41bd18; depth: 0
  ----------------------------------------
  SO: 0x1182dc750, type: 24, owner: 0x11b452348, flag: INIT/-/-/0x00
  (buffer) (CR) PR: 0x11b305810 FLG: 0x108000
  class bit: (nil)
  kcbbfbp: [BH: 0xf2fc69f8, LINK: 0x1182dc790]
  where: kdswh05: kdsgrp, why: 0
  BH (0xf2fc69f8) file#: 1 rdba: 0x00410652 (1/67154) class: 1 ba: 0xf297c000
    set: 3 blksize: 8192 bsi: 0 set-flg: 2 pwbcnt: 272
    dbwrid: 0 obj: 54208 objn: 54202 tsn: 0 afn: 1
    hash: [f2fc47f8,1181f3038] lru: [f2fc6b88,f2fc6968]
    obj-flags: object_ckpt_list
    ckptq: [1182ecf38,1182ecf38] fileq: [1182ecf58,1182ecf58] objq: [108712a28,108712a28]
    use: [1182dc790,1182dc790] wait: [NULL]
    st: XCURRENT md: SHR tch: 12
    flags: buffer_dirty gotten_in_current_mode block_written_once
           redo_since_read
    LRBA: [0xc7.73b.0] HSCN: [0x0.1cbe52] HSUB: [1]
    Using State Objects
      ----------------------------------------
      SO: 0x1182dc750, type: 24, owner: 0x11b452348, flag: INIT/-/-/0x00
      (buffer) (CR) PR: 0x11b305810 FLG: 0x108000
      class bit: (nil)
      kcbbfbp: [BH: 0xf2fc69f8, LINK: 0x1182dc790]
      where: kdswh05: kdsgrp, why: 0
    buffer tsn: 0 rdba: 0x00410652 (1/67154)
    scn: 0x0000.001cbe52 seq: 0x01 flg: 0x02 tail: 0xbe520601
    frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
tab 0, row 0, @0x1f9a
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 02
tab 0, row 1, @0x1f94
tl: 6 fb: --H-FL-- lb: 0x2  cc: 1
col  0: [ 2]  c1 15
end_of_block_dump

(buffer) (CR) PR: 0x11b305810 FLG: 0x108000
st: XCURRENT md: SHR tch: 12

The buffer header shows status=XCURRENT and mode=KCBMSHR (current share). Now query x$kcbbf to find the cache buffer handle:

SQL> select distinct KCBBPBH from  x$kcbbf ;

KCBBPBH
----------------
00
00000000F2FC69F8            ==> 0xf2fc69f8

SQL> select * from x$kcbbf where kcbbpbh='00000000F2FC69F8';

ADDR                   INDX    INST_ID KCBBFSO_TYP KCBBFSO_FLG KCBBFSO_OWN
---------------- ---------- ---------- ----------- ----------- ----------------
  KCBBFFLG    KCBBFCR    KCBBFCM KCBBFMBR         KCBBPBH
---------- ---------- ---------- ---------------- ----------------
KCBBPBF          X0KCBBPBH        X0KCBBPBF        X1KCBBPBH
---------------- ---------------- ---------------- ----------------
X1KCBBPBF        KCBBFBH            KCBBFWHR   KCBBFWHY
---------------- ---------------- ---------- ----------
00000001182DC750        748          1          24           1 000000011B452348
   1081344          1          0 00               00000000F2FC69F8
00000001182DC750 00               00000001182DC750 00
00000001182DC7F8 00                      583          0

SQL> desc x$kcbbf;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ADDR                                               RAW(8)
 INDX                                               NUMBER
 INST_ID                                            NUMBER
 KCBBFSO_TYP                                        NUMBER
 KCBBFSO_FLG                                        NUMBER
 KCBBFSO_OWN                                        RAW(8)
 KCBBFFLG                                           NUMBER
 KCBBFCR                                            NUMBER
 KCBBFCM                                            NUMBER
 KCBBFMBR                                           RAW(8)
 KCBBPBH                                            RAW(8)
 KCBBPBF                                            RAW(8)
 X0KCBBPBH                                          RAW(8)
 X0KCBBPBF                                          RAW(8)
 X1KCBBPBH                                          RAW(8)
 X1KCBBPBF                                          RAW(8)
 KCBBFBH                                            RAW(8)
 KCBBFWHR                                           NUMBER
 KCBBFWHY                                           NUMBER

After letting the process continue in gdb, kcbrls completes, the buffer is released, and the cache buffer handle is freed:

SQL> select distinct KCBBPBH from  x$kcbbf ;

KCBBPBH
----------------
00

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
Introduction

The way I previously created messages to send to the GovTalk service was to build the request with an XMLDocument. While this worked, it left a number of problems; not least that for every message a special function would need to be created. This is OK for the short term, but the biggest cost in any software project is maintenance, and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce…

The Sweet Chilli Sauce

The request to search for a company based on its number is as follows:

<GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" >
  <EnvelopeVersion>1.0</EnvelopeVersion>
  <Header>
    <MessageDetails>
      <Class>NumberSearch</Class>
      <Qualifier>request</Qualifier>
      <TransactionID>1</TransactionID>
    </MessageDetails>
    <SenderDetails>
      <IDAuthentication>
        <SenderID>????????????????????????????????</SenderID>
        <Authentication>
          <Method>CHMD5</Method>
          <Value>????????????????????????????????</Value>
        </Authentication>
      </IDAuthentication>
    </SenderDetails>
  </Header>
  <GovTalkDetails>
    <Keys/>
  </GovTalkDetails>
  <Body>
    <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
      <PartialCompanyNumber>99999999</PartialCompanyNumber>
      <DataSet>LIVE</DataSet>
      <SearchRows>1</SearchRows>
    </NumberSearchRequest>
  </Body>
</GovTalkMessage>

This is the XML that we send to the GovTalk Service, and we get back a list of companies that match the criteria passed. A message is structured in two parts: the envelope, which identifies the person sending the request and the name of the request, and the body, which gives the detail of the company we are looking for.

The Chilli

What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start, we need to create an object which will represent the contents of the message we are sending. There are, however, some properties common to all the messages that we send to Companies House:

SenderId – the id of the person sending the message
SenderPassword – the password associated with the Id
TransactionId – unique identifier for the message
AuthenticationValue – authenticates the request

Because these properties are unique to the Companies House messages, and because they are shared by all of them, they are perfect candidates for a base class.
The class is as follows:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Security.Cryptography;
using System.Text;
using System.Text.RegularExpressions;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace CompanyHub.Services
{
    public class GovTalkRequest
    {
        public GovTalkRequest()
        {
            try
            {
                SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId");
                SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword");
                TransactionId = DateTime.Now.Ticks.ToString();
                AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId));
            }
            catch (System.Exception ex)
            {
                throw ex;
            }
        }

        /// <summary>
        /// returns the Sender ID to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderID { get; set; }

        /// <summary>
        /// returns the password to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderPassword { get; set; } // end SenderPassword

        /// <summary>
        /// Transaction Id - uses the Time and Date converted to Ticks
        /// </summary>
        public String TransactionId { get; set; } // end TransactionId

        /// <summary>
        /// calculates the authentication value that will be used when
        /// communicating with the GovTalk service
        /// </summary>
        public String AuthenticationValue { get; set; } // end AuthenticationValue property

        /// <summary>
        /// encodes password(s) using MD5
        /// </summary>
        /// <param name="clearPassword"></param>
        /// <returns></returns>
        public static String EncodePassword(String clearPassword)
        {
            MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider();
            byte[] hashedBytes;
            hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword));
            String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower();
            return result;
        }
    }
}

There is nothing particularly clever here, except for the EncodePassword method, which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search, in addition to the properties above, we need a partial company number, which dataset to search – for the purposes of the project we only need to search the LIVE set, so this can be set in the constructor – and the SearchRows. Again, all are exposed as properties, with SearchRows and DataSet initialized in the constructor.

public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable
{
    /// <summary>
    ///
    /// </summary>
    public CompanyNumberSearchRequest() : base()
    {
        DataSet = "LIVE";
        SearchRows = 1;
    }

    /// <summary>
    /// Company Number to search against
    /// </summary>
    public String PartialCompanyNumber { get; set; }

    /// <summary>
    /// What DataSet should be searched for the company
    /// </summary>
    public String DataSet { get; set; }

    /// <summary>
    /// How many rows should be returned
    /// </summary>
    public int SearchRows { get; set; }

    public void Dispose()
    {
        PartialCompanyNumber = String.Empty;
        DataSet = "LIVE";
        SearchRows = 1;
    }
}

As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is plain good practice to dispose of objects, but because it also gives us more versatility when using the object.
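One illustration of that versatility: because the request type implements IDisposable, a request can be scoped to a using block. A minimal sketch (the company number here is just the sample value from the request XML above):

using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest())
{
    // the base constructor has already filled in SenderID, TransactionId
    // and AuthenticationValue from the role configuration
    requestObj.PartialCompanyNumber = "99999999";

    // ... create and send the request here ...
} // Dispose() resets the search criteria to their defaults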
There are four stages in making a request, and this is reflected in the four methods we execute in making a call to the Companies House service:

Create a request
Send the request
Check the status
If OK, then get the results of the request

I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When creating a request there are three stages:

Get the template for the message
Serialize the object representing the message
Transform the serialized object using a predefined XSLT file

Each of my templates is defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace of the resource. To make the code as reusable as possible, I build the full ‘path’ within the GetRequest method:

requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);

So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource:

asm = Assembly.GetExecutingAssembly();
sr = asm.GetManifestResourceStream(requestFile);

Once retrieved, the stream can be returned to the calling function, and we now have the XSLT that defines the message. Time now to serialize the request to create the other side of this message:

// Serialize object containing Request, Load into XML Document
t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
serializer.Serialize(xmlTextWriter, Obj);
ms = (MemoryStream)xmlTextWriter.BaseStream;
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());

First off, we need the type of the object containing the message properties, so we call its GetType method. Next we need a MemoryStream, an XmlSerializer and an XmlTextWriter, so these are initialized. The object is serialized by calling the Serialize method of the serializer object. The result of that is then converted into a MemoryStream, and that MemoryStream is converted into a string.

ConvertByteArrayToString

This is a fairly simple function which uses an ASCIIEncoding object, found within the System.Text namespace, to convert an array of bytes into a string.

public static String ConvertByteArrayToString(byte[] bytes)
{
    System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
    return enc.GetString(bytes);
}

I only put it into a function because I will be using it in various places.

The Sauce

When adding support for other messages, outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies: the XSLT file determines the format of the message.
For the CompanyNumberSearch the XSLT file is as follows:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" >
      <EnvelopeVersion>1.0</EnvelopeVersion>
      <Header>
        <MessageDetails>
          <Class>NumberSearch</Class>
          <Qualifier>request</Qualifier>
          <TransactionID>
            <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/>
          </TransactionID>
        </MessageDetails>
        <SenderDetails>
          <IDAuthentication>
            <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID>
            <Authentication>
              <Method>CHMD5</Method>
              <Value>
                <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/>
              </Value>
            </Authentication>
          </IDAuthentication>
        </SenderDetails>
      </Header>
      <GovTalkDetails>
        <Keys/>
      </GovTalkDetails>
      <Body>
        <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
          <PartialCompanyNumber>
            <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/>
          </PartialCompanyNumber>
          <DataSet>
            <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/>
          </DataSet>
          <SearchRows>
            <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/>
          </SearchRows>
        </NumberSearchRequest>
      </Body>
    </GovTalkMessage>
  </xsl:template>
</xsl:stylesheet>

The outer two tags define that this is an XSLT stylesheet and give the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object, which will transform the XSLT template and the serialized object into the request to Companies House:

xslt = new XslCompiledTransform();
resultStream = new MemoryStream();
writer = new XmlTextWriter(resultStream, Encoding.ASCII);
doc = new XmlDocument();

The transformation requires an XmlTextWriter to write the XML (writer) and a stream to place the transformed output into (resultStream). The XML will be loaded into an XmlDocument object (doc) prior to the transformation.

// create XSLT Template
xslTemplate = Toolbox.GetRequest(Template);
xslTemplate.Seek(0, SeekOrigin.Begin);
templateReader = XmlReader.Create(xslTemplate);
xslt.Load(templateReader);

I have stored all the templates as a series of embedded resources, and the GetRequest call takes the name of the template and extracts the relevant XSLT file:

/// <summary>
/// Gets the framework XML which makes the request
/// </summary>
/// <param name="RequestFile"></param>
/// <returns></returns>
public static Stream GetRequest(String RequestFile)
{
    String requestFile = String.Empty;
    Stream sr = null;
    Assembly asm = null;

    try
    {
        requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);
        asm = Assembly.GetExecutingAssembly();
        sr = asm.GetManifestResourceStream(requestFile);
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        asm = null;
    }
    return sr;
} // end private static stream GetRequest

We first take the template name and expand it to include the full namespace of the embedded resource. I like to keep all my schemas in the same directory, and so the namespace reflects this.
Then we get the currently executing assembly (which contains the resources) with the call to GetExecutingAssembly(). Finally we get a stream which contains the XSLT file. We use this stream to load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object.

We convert the object containing the message properties into XML by serializing it, calling the Serialize() method of the XmlSerializer object. To set up the object we do the following:

t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);

We first determine the type of the object being transferred by calling GetType(). We create an XmlSerializer object by passing it the type of the object being serialized. The serializer writes to a memory stream, and that is linked to an XmlTextWriter. The next job is to serialize the object and load it into an XmlDocument:

serializer.Serialize(xmlTextWriter, Obj);
ms = (MemoryStream)xmlTextWriter.BaseStream;
xmlRequest = new XmlTextReader(ms);
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());
doc.LoadXml(GovTalkRequest);

Time to transform the XML to construct the full request:

xslt.Transform(doc, writer);
resultStream.Seek(0, SeekOrigin.Begin);
request = Toolbox.ConvertByteArrayToString(resultStream.ToArray());

So that creates the full request to be sent to Companies House.

Sending the request

So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House service.

Configuration within an Azure project

There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article, but the following is a summary. Configuration is defined in two files within the parent project. The *.csdef file contains the definition of the configuration settings:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="CompanyHub.Host">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
  </WebRole>
  <WebRole name="CompanyHub.Services">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="8080" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="SenderId"/>
      <Setting name="SenderPassword" />
      <Setting name="GovTalkUrl"/>
    </ConfigurationSettings>
  </WebRole>
  <WorkerRole name="CompanyHub.Worker">
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

Above is the configuration definition from the project. What we are interested in, however, is the ConfigurationSettings tag of the CompanyHub.Services WebRole.
There are four configuration settings here, but at the moment we are interested in the second to fourth settings: SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceDefinition.cscfg file:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="CompanyHub.Host">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Services">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="SenderId" value="UserID"/>
      <Setting name="SenderPassword" value="Password"/>
      <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/>
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Worker">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters, we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers:

govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl");
request = WebRequest.Create(govTalkUrl);
request.Method = "POST";
request.ContentType = "text/xml";
writer = new StreamWriter(request.GetRequestStream());
writer.WriteLine(RequestMessage);
writer.Close();

We use the WebRequest object to send the request, set the method of sending to ‘POST’ and the type of data to text/xml. Once set up, all we do is write the request to the writer – this sends the request to Companies House.

Did the Request Work Part I – Getting the response

Having sent a request, we now need the result of that request:

response = request.GetResponse();
reader = response.GetResponseStream();
result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));

The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls, the results come in the form of a stream, which we convert into a string.

Did the Request Work Part II – Translating the Response

Much like XSLT and XML were used to create the original request, so they can be used to extract the response: by deserializing the result we create an object that contains the response.

Did it work?

It would be really great if everything worked all the time.
Of course, if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not:

a) Collapse in a heap (this is an area of memory)
b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting)
c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA gets a manned moon/mars mission set up, unlikely to happen)
d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and, judging by the recent output of that town, have turned plagiarism into an art form))
e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened)

So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <xsl:template match="/">
    <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Status>
        <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/>
      </Status>
      <Text>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/>
      </Text>
      <Location>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/>
      </Location>
      <Number>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/>
      </Number>
      <Type>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/>
      </Type>
    </GovTalkStatus>
  </xsl:template>
</xsl:stylesheet>

The only thing different from the previous XSL files is the reference to two namespaces, ev and gt. These are defined at the top of the GovTalk response:

xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"

If we do not put these references into the XSLT template, then the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity:

encoder = new ASCIIEncoding();
ms = new MemoryStream(encoder.GetBytes(statusXML));
serializer = new XmlSerializer(typeof(GovTalkStatus));
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
messageStatus = (GovTalkStatus)serializer.Deserialize(ms);

We set up a serialization object using the type containing the error state and pass to it the result of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state and the error message. All we need to do is check the status. If there is an error, then we can flag an error.
If not, then we extract the results and pass them as an object back to the calling function. We do this by – guess what – defining an XSLT template for the result and using that to create an XML stream which can be deserialized into a .NET object. In this instance the XSLT to create the result of a Company Number Search is:

<?xml version="1.0" encoding="us-ascii"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev">
  <xsl:template match="/">
    <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <CompanyNumber>
        <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/>
      </CompanyNumber>
      <CompanyName>
        <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/>
      </CompanyName>
    </CompanySearchResult>
  </xsl:template>
</xsl:stylesheet>

and the object definition is:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace CompanyHub.Services
{
    public class CompanySearchResult
    {
        public CompanySearchResult()
        {
            CompanyNumber = String.Empty;
            CompanyName = String.Empty;
        }

        public String CompanyNumber { get; set; }
        public String CompanyName { get; set; }
    }
}

Our entire code to send a request and interpret the results is:

String request = String.Empty;
String response = String.Empty;
GovTalkStatus status = null;
fault = null;

try
{
    using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest())
    {
        requestObj.PartialCompanyNumber = CompanyNumber;
        request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl");
        response = Toolbox.SendGovTalkRequest(request);
        status = Toolbox.GetMessageStatus(response);
        if (status.Status.ToLower() == "error")
        {
            fault = new HubFault() { Message = status.Text };
        }
        else
        {
            Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult));
        }
    }
}
catch (FaultException<ArgumentException> ex)
{
    fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message };
}
catch (System.Exception ex)
{
    fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message };
}
finally
{
}

Wrap up

So there we have it – a reusable set of functions to send XML requests to, and interpret XML results from, an internet-based service. The code is reusable, with a little change, with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service, all I need to do is create the various objects for the results and messages sent, and the relevant XSLT files. I might need minor changes for other services, but something like 70-90% will be exactly the same.
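As a footnote, the Toolbox.CreateRequest helper used in the listing above is never shown in one piece in this article. Assembled from the fragments discussed earlier, it might look roughly like the following sketch (the method shape and local names are my own assumption; the individual calls are all taken from the snippets above):

// requires System, System.IO, System.Text, System.Xml,
// System.Xml.Serialization and System.Xml.Xsl
public static String CreateRequest(Object Obj, String Template)
{
    XslCompiledTransform xslt = new XslCompiledTransform();
    MemoryStream resultStream = new MemoryStream();
    XmlTextWriter writer = new XmlTextWriter(resultStream, Encoding.ASCII);
    XmlDocument doc = new XmlDocument();

    // load the XSLT template from the embedded resources
    Stream xslTemplate = Toolbox.GetRequest(Template);
    xslTemplate.Seek(0, SeekOrigin.Begin);
    XmlReader templateReader = XmlReader.Create(xslTemplate);
    xslt.Load(templateReader);

    // serialize the message object into an XML document
    Type t = Obj.GetType();
    MemoryStream ms = new MemoryStream();
    XmlSerializer serializer = new XmlSerializer(t);
    XmlTextWriter xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
    serializer.Serialize(xmlTextWriter, Obj);
    ms = (MemoryStream)xmlTextWriter.BaseStream;
    doc.LoadXml(Toolbox.ConvertByteArrayToString(ms.ToArray()));

    // transform the serialized object with the template into the full message
    xslt.Transform(doc, writer);
    resultStream.Seek(0, SeekOrigin.Begin);
    return Toolbox.ConvertByteArrayToString(resultStream.ToArray());
}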

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does a NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data, which is wired to drbd. I chose GFS2 mainly because of the announced performance and also because of the volume size, which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines:

Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

In this case, the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount and that by increasing RPCNFSDCOUNT to a higher value this would become stable.
I've increased it several times and apparently, after a while, the log messages started appearing less often. The value is now at 32. After further investigating the issue, I came across a different hang, although the NFS messages still appear in the logs. Sometimes the GFS2 filesystem simply hangs, which causes both NFS and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operations. This hang leaves no trace on the client side (and no "nfs: server ... not responding" messages either), and on the storage side the logs appear to be empty, even though rsyslogd is running. The nodes connect through a 10 Gbps non-dedicated connection, but I don't think that is an issue, because the GFS2 hang is confirmed even when connecting directly to the active storage server. I've been trying to solve this for a while now, and I tried different NFS configuration options before I found out that the GFS2 filesystem is also hanging. The NFS mount is exported as such:

/srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

And the NFS client mounts with:

mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

After some tests, these were the configurations that yielded the best performance for the cluster. I am desperate to find a solution for this, as the cluster is already in production; I need to fix it so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy load, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different filesystem, but I need some solid pointers on whether that will solve these problems, as the volume size is extremely large at this point. The servers are hosted by a third-party enterprise and I don't have physical access to them. Best regards.

EDIT 1: The servers are physical servers and their specs are:

Webservers:
Intel Bi Xeon E5606 2x4 2.13GHz
24GB DDR3
Intel SSD 320 2 x 120GB Raid 1

Storage:
Intel i5 3550 3.3GHz
16GB DDR3
12 x 2TB SATA

Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10 Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, where the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connections between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps in figuring out the problem.
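For what it's worth, here is a minimal diagnostic sketch of what I plan to run the next time a hang happens. The commands are standard nfs-utils and debugfs tooling, not something already wired into the cluster, so treat this as an assumption about where to look rather than a confirmed procedure:

# on a webserver (NFS client):
nfsstat -rc     # client RPC counters; a jump in retransmissions points at the transport
nfsstat -m      # shows the rsize/wsize and other options actually negotiated per mount
# on the active storage node (assumes GFS2 debugfs support is available in this kernel):
mount -t debugfs none /sys/kernel/debug 2>/dev/null
cat /sys/kernel/debug/gfs2/*/glocks | head -50   # glock holders/waiters during a hang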
EDIT 2: Relevant software versions:

CentOS 2.6.32-279.9.1.el6.x86_64
nfs-utils-1.2.3-26.el6.x86_64
nfs-utils-lib-1.1.5-4.el6.x86_64
gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
drbd84-utils-8.4.2-1.el6.elrepo.x86_64

DRBD configuration on the storage servers:

#/etc/drbd.d/storage.res
resource storage {
    protocol C;
    on <server1 fqdn> {
        device /dev/drbd0;
        disk /dev/vg_storage/LV_replicated;
        address <server1 ip>:7788;
        meta-disk internal;
    }
    on <server2 fqdn> {
        device /dev/drbd0;
        disk /dev/vg_storage/LV_replicated;
        address <server2 ip>:7788;
        meta-disk internal;
    }
}

NFS configuration on the storage servers:

#/etc/sysconfig/nfs
RPCNFSDCOUNT=32
STATD_PORT=10002
STATD_OUTGOING_PORT=10003
MOUNTD_PORT=10004
RQUOTAD_PORT=10005
LOCKD_UDPPORT=30001
LOCKD_TCPPORT=30001

(Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

GFS2 configuration:

# gfs2_tool gettune <mountpoint>
incore_log_blocks = 1024
log_flush_secs = 60
quota_warn_period = 10
quota_quantum = 60
max_readahead = 262144
complain_secs = 10
statfs_slow = 0
quota_simul_sync = 64
statfs_quantum = 30
quota_scale = 1.0000 (1, 1)
new_files_jdata = 0

Storage network environment:

eth0      Link encap:Ethernet  HWaddr <mac address>
          inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
          inet6 addr: <ip address> Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

eth0:0    Link encap:Ethernet  HWaddr <mac address>
          inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

The IP addresses are statically assigned with the given network configurations:

DEVICE="eth0"
BOOTPROTO="static"
HWADDR=<mac address>
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=<ip address>
NETMASK=<net mask>

and

DEVICE="eth0:0"
BOOTPROTO="static"
HWADDR=<mac address>
IPADDR=<ip failover>
NETMASK=<net mask>
ONBOOT="yes"
BROADCAST=<bcast address>

Hosts file to allow for a graceful NFS failover, in conjunction with the NFS option fsid=25 set on both storage servers:

#/etc/hosts
<storage ip failover address> active.storage.vlan
<webserver ip failover address> active.service.vlan

As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLAN for now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details, please tell me.

EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was unable to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while.

INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Seems doctrine listener is not fired

    - by Roel Veldhuizen
I have a service that should be executed the moment an object is persisted. Though the code looks like it should work, it doesn't. I configured the service with the following yml:

services:
    bla_orm.listener:
        class: Bla\OrmBundle\EventListener\UserManager
        arguments: [@security.encoder_factory]
        tags:
            - { name: doctrine.event_listener, event: prePersist }

The class:

namespace Bla\OrmBundle\EventListener;

use Doctrine\ORM\Event\LifecycleEventArgs;
use Bla\OrmBundle\Entity\User;

class UserManager
{
    protected $encoderFactory;

    public function __construct(\Symfony\Component\Security\Core\Encoder\EncoderFactoryInterface $encoderFactory)
    {
        $this->encoderFactory = $encoderFactory;
    }

    public function prePersist(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();
        if ($entity instanceof User) {
            $encoder = $this->encoderFactory->getEncoder($entity);
            $entity->setSalt(rand(10000, 99999));
            $password = $encoder->encodePassword($entity->getPassword(), $entity->getSalt());
            $entity->setPassword($password);
        }
    }
}

Symfony version: 2.3.3 - app/dev/debug

Output of container:debug:

[container] Public services Service Id Scope Class Name annotation_reader container Doctrine\Common\Annotations\FileCacheReader assetic.asset_manager container Assetic\Factory\LazyAssetManager assetic.controller prototype Symfony\Bundle\AsseticBundle\Controller\AsseticController assetic.filter.cssrewrite container Assetic\Filter\CssRewriteFilter assetic.filter_manager container Symfony\Bundle\AsseticBundle\FilterManager assetic.request_listener container Symfony\Bundle\AsseticBundle\EventListener\RequestListener cache_clearer container Symfony\Component\HttpKernel\CacheClearer\ChainCacheClearer cache_warmer container Symfony\Component\HttpKernel\CacheWarmer\CacheWarmerAggregate data_collector.request container Symfony\Component\HttpKernel\DataCollector\RequestDataCollector data_collector.router container Symfony\Bundle\FrameworkBundle\DataCollector\RouterDataCollector database_connection n/a alias for doctrine.dbal.default_connection debug.controller_resolver container Symfony\Component\HttpKernel\Controller\TraceableControllerResolver debug.deprecation_logger_listener container Symfony\Component\HttpKernel\EventListener\ErrorsLoggerListener debug.emergency_logger_listener container Symfony\Component\HttpKernel\EventListener\ErrorsLoggerListener debug.event_dispatcher container Symfony\Component\HttpKernel\Debug\TraceableEventDispatcher debug.stopwatch container Symfony\Component\Stopwatch\Stopwatch debug.templating.engine.php container Symfony\Bundle\FrameworkBundle\Templating\TimedPhpEngine debug.templating.engine.twig n/a alias for templating doctrine container Doctrine\Bundle\DoctrineBundle\Registry doctrine.dbal.connection_factory container Doctrine\Bundle\DoctrineBundle\ConnectionFactory doctrine.dbal.default_connection container stdClass doctrine.orm.default_entity_manager container Doctrine\ORM\EntityManager doctrine.orm.default_manager_configurator container Doctrine\Bundle\DoctrineBundle\ManagerConfigurator doctrine.orm.entity_manager n/a alias for doctrine.orm.default_entity_manager doctrine.orm.validator.unique container Symfony\Bridge\Doctrine\Validator\Constraints\UniqueEntityValidator doctrine.orm.validator_initializer container Symfony\Bridge\Doctrine\Validator\DoctrineInitializer event_dispatcher container Symfony\Component\EventDispatcher\ContainerAwareEventDispatcher file_locator container Symfony\Component\HttpKernel\Config\FileLocator filesystem container
Symfony\Component\Filesystem\Filesystem form.csrf_provider container Symfony\Component\Form\Extension\Csrf\CsrfProvider\SessionCsrfProvider form.factory container Symfony\Component\Form\FormFactory form.registry container Symfony\Component\Form\FormRegistry form.resolved_type_factory container Symfony\Component\Form\ResolvedFormTypeFactory form.type.birthday container Symfony\Component\Form\Extension\Core\Type\BirthdayType form.type.button container Symfony\Component\Form\Extension\Core\Type\ButtonType form.type.checkbox container Symfony\Component\Form\Extension\Core\Type\CheckboxType form.type.choice container Symfony\Component\Form\Extension\Core\Type\ChoiceType form.type.collection container Symfony\Component\Form\Extension\Core\Type\CollectionType form.type.country container Symfony\Component\Form\Extension\Core\Type\CountryType form.type.currency container Symfony\Component\Form\Extension\Core\Type\CurrencyType form.type.date container Symfony\Component\Form\Extension\Core\Type\DateType form.type.datetime container Symfony\Component\Form\Extension\Core\Type\DateTimeType form.type.email container Symfony\Component\Form\Extension\Core\Type\EmailType form.type.entity container Symfony\Bridge\Doctrine\Form\Type\EntityType form.type.file container Symfony\Component\Form\Extension\Core\Type\FileType form.type.form container Symfony\Component\Form\Extension\Core\Type\FormType form.type.hidden container Symfony\Component\Form\Extension\Core\Type\HiddenType form.type.integer container Symfony\Component\Form\Extension\Core\Type\IntegerType form.type.language container Symfony\Component\Form\Extension\Core\Type\LanguageType form.type.locale container Symfony\Component\Form\Extension\Core\Type\LocaleType form.type.money container Symfony\Component\Form\Extension\Core\Type\MoneyType form.type.number container Symfony\Component\Form\Extension\Core\Type\NumberType form.type.password container Symfony\Component\Form\Extension\Core\Type\PasswordType form.type.percent container Symfony\Component\Form\Extension\Core\Type\PercentType form.type.radio container Symfony\Component\Form\Extension\Core\Type\RadioType form.type.repeated container Symfony\Component\Form\Extension\Core\Type\RepeatedType form.type.reset container Symfony\Component\Form\Extension\Core\Type\ResetType form.type.search container Symfony\Component\Form\Extension\Core\Type\SearchType form.type.submit container Symfony\Component\Form\Extension\Core\Type\SubmitType form.type.text container Symfony\Component\Form\Extension\Core\Type\TextType form.type.textarea container Symfony\Component\Form\Extension\Core\Type\TextareaType form.type.time container Symfony\Component\Form\Extension\Core\Type\TimeType form.type.timezone container Symfony\Component\Form\Extension\Core\Type\TimezoneType form.type.url container Symfony\Component\Form\Extension\Core\Type\UrlType form.type_extension.csrf container Symfony\Component\Form\Extension\Csrf\Type\FormTypeCsrfExtension form.type_extension.form.http_foundation container Symfony\Component\Form\Extension\HttpFoundation\Type\FormTypeHttpFoundationExtension form.type_extension.form.validator container Symfony\Component\Form\Extension\Validator\Type\FormTypeValidatorExtension form.type_extension.repeated.validator container Symfony\Component\Form\Extension\Validator\Type\RepeatedTypeValidatorExtension form.type_extension.submit.validator container Symfony\Component\Form\Extension\Validator\Type\SubmitTypeValidatorExtension form.type_guesser.doctrine container 
Symfony\Bridge\Doctrine\Form\DoctrineOrmTypeGuesser form.type_guesser.validator container Symfony\Component\Form\Extension\Validator\ValidatorTypeGuesser fragment.handler container Symfony\Component\HttpKernel\Fragment\FragmentHandler fragment.listener container Symfony\Component\HttpKernel\EventListener\FragmentListener fragment.renderer.hinclude container Symfony\Bundle\FrameworkBundle\Fragment\ContainerAwareHIncludeFragmentRenderer fragment.renderer.inline container Symfony\Component\HttpKernel\Fragment\InlineFragmentRenderer http_kernel container Symfony\Component\HttpKernel\DependencyInjection\ContainerAwareHttpKernel kernel container locale_listener container Symfony\Component\HttpKernel\EventListener\LocaleListener logger container Symfony\Bridge\Monolog\Logger mailer n/a alias for swiftmailer.mailer.default monolog.handler.chromephp container Symfony\Bridge\Monolog\Handler\ChromePhpHandler monolog.handler.debug container Symfony\Bridge\Monolog\Handler\DebugHandler monolog.handler.firephp container Symfony\Bridge\Monolog\Handler\FirePHPHandler monolog.handler.main container Monolog\Handler\StreamHandler monolog.logger.deprecation container Symfony\Bridge\Monolog\Logger monolog.logger.doctrine container Symfony\Bridge\Monolog\Logger monolog.logger.emergency container Symfony\Bridge\Monolog\Logger monolog.logger.event container Symfony\Bridge\Monolog\Logger monolog.logger.profiler container Symfony\Bridge\Monolog\Logger monolog.logger.request container Symfony\Bridge\Monolog\Logger monolog.logger.router container Symfony\Bridge\Monolog\Logger monolog.logger.security container Symfony\Bridge\Monolog\Logger monolog.logger.templating container Symfony\Bridge\Monolog\Logger profiler container Symfony\Component\HttpKernel\Profiler\Profiler profiler_listener container Symfony\Component\HttpKernel\EventListener\ProfilerListener property_accessor container Symfony\Component\PropertyAccess\PropertyAccessor request request response_listener container Symfony\Component\HttpKernel\EventListener\ResponseListener router container Symfony\Bundle\FrameworkBundle\Routing\Router router_listener container Symfony\Component\HttpKernel\EventListener\RouterListener routing.loader container Symfony\Bundle\FrameworkBundle\Routing\DelegatingLoader security.context container Symfony\Component\Security\Core\SecurityContext security.encoder_factory container Symfony\Component\Security\Core\Encoder\EncoderFactory security.firewall container Symfony\Component\Security\Http\Firewall security.firewall.map.context.dev container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.firewall.map.context.login container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.firewall.map.context.rest container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.firewall.map.context.secured_area container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.rememberme.response_listener container Symfony\Component\Security\Http\RememberMe\ResponseListener security.secure_random container Symfony\Component\Security\Core\Util\SecureRandom security.validator.user_password container Symfony\Component\Security\Core\Validator\Constraints\UserPasswordValidator sensio.distribution.webconfigurator n/a alias for sensio_distribution.webconfigurator sensio_distribution.webconfigurator container Sensio\Bundle\DistributionBundle\Configurator\Configurator sensio_framework_extra.cache.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\CacheListener 
sensio_framework_extra.controller.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\ControllerListener sensio_framework_extra.converter.datetime container Sensio\Bundle\FrameworkExtraBundle\Request\ParamConverter\DateTimeParamConverter sensio_framework_extra.converter.doctrine.orm container Sensio\Bundle\FrameworkExtraBundle\Request\ParamConverter\DoctrineParamConverter sensio_framework_extra.converter.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\ParamConverterListener sensio_framework_extra.converter.manager container Sensio\Bundle\FrameworkExtraBundle\Request\ParamConverter\ParamConverterManager sensio_framework_extra.view.guesser container Sensio\Bundle\FrameworkExtraBundle\Templating\TemplateGuesser sensio_framework_extra.view.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\TemplateListener service_container container session container Symfony\Component\HttpFoundation\Session\Session session.handler container Symfony\Component\HttpFoundation\Session\Storage\Handler\NativeFileSessionHandler session.storage n/a alias for session.storage.native session.storage.filesystem container Symfony\Component\HttpFoundation\Session\Storage\MockFileSessionStorage session.storage.native container Symfony\Component\HttpFoundation\Session\Storage\NativeSessionStorage session.storage.php_bridge container Symfony\Component\HttpFoundation\Session\Storage\PhpBridgeSessionStorage session_listener container Symfony\Bundle\FrameworkBundle\EventListener\SessionListener streamed_response_listener container Symfony\Component\HttpKernel\EventListener\StreamedResponseListener swiftmailer.email_sender.listener container Symfony\Bundle\SwiftmailerBundle\EventListener\EmailSenderListener swiftmailer.mailer n/a alias for swiftmailer.mailer.default swiftmailer.mailer.default container Swift_Mailer swiftmailer.mailer.default.plugin.messagelogger container Swift_Plugins_MessageLogger swiftmailer.mailer.default.spool container Swift_FileSpool swiftmailer.mailer.default.transport container Swift_Transport_SpoolTransport swiftmailer.mailer.default.transport.real container Swift_Transport_EsmtpTransport swiftmailer.plugin.messagelogger n/a alias for swiftmailer.mailer.default.plugin.messagelogger swiftmailer.spool n/a alias for swiftmailer.mailer.default.spool swiftmailer.transport n/a alias for swiftmailer.mailer.default.transport swiftmailer.transport.real n/a alias for swiftmailer.mailer.default.transport.real templating container Symfony\Bundle\TwigBundle\Debug\TimedTwigEngine templating.asset.package_factory container Symfony\Bundle\FrameworkBundle\Templating\Asset\PackageFactory templating.filename_parser container Symfony\Bundle\FrameworkBundle\Templating\TemplateFilenameParser templating.globals container Symfony\Bundle\FrameworkBundle\Templating\GlobalVariables templating.helper.actions container Symfony\Bundle\FrameworkBundle\Templating\Helper\ActionsHelper templating.helper.assets request Symfony\Component\Templating\Helper\CoreAssetsHelper templating.helper.code container Symfony\Bundle\FrameworkBundle\Templating\Helper\CodeHelper templating.helper.form container Symfony\Bundle\FrameworkBundle\Templating\Helper\FormHelper templating.helper.logout_url container Symfony\Bundle\SecurityBundle\Templating\Helper\LogoutUrlHelper templating.helper.request container Symfony\Bundle\FrameworkBundle\Templating\Helper\RequestHelper templating.helper.router container Symfony\Bundle\FrameworkBundle\Templating\Helper\RouterHelper templating.helper.security container 
Symfony\Bundle\SecurityBundle\Templating\Helper\SecurityHelper templating.helper.session container Symfony\Bundle\FrameworkBundle\Templating\Helper\SessionHelper templating.helper.slots container Symfony\Component\Templating\Helper\SlotsHelper templating.helper.translator container Symfony\Bundle\FrameworkBundle\Templating\Helper\TranslatorHelper templating.loader container Symfony\Bundle\FrameworkBundle\Templating\Loader\FilesystemLoader templating.name_parser container Symfony\Bundle\FrameworkBundle\Templating\TemplateNameParser translation.dumper.csv container Symfony\Component\Translation\Dumper\CsvFileDumper translation.dumper.ini container Symfony\Component\Translation\Dumper\IniFileDumper translation.dumper.mo container Symfony\Component\Translation\Dumper\MoFileDumper translation.dumper.php container Symfony\Component\Translation\Dumper\PhpFileDumper translation.dumper.po container Symfony\Component\Translation\Dumper\PoFileDumper translation.dumper.qt container Symfony\Component\Translation\Dumper\QtFileDumper translation.dumper.res container Symfony\Component\Translation\Dumper\IcuResFileDumper translation.dumper.xliff container Symfony\Component\Translation\Dumper\XliffFileDumper translation.dumper.yml container Symfony\Component\Translation\Dumper\YamlFileDumper translation.extractor container Symfony\Component\Translation\Extractor\ChainExtractor translation.extractor.php container Symfony\Bundle\FrameworkBundle\Translation\PhpExtractor translation.loader container Symfony\Bundle\FrameworkBundle\Translation\TranslationLoader translation.loader.csv container Symfony\Component\Translation\Loader\CsvFileLoader translation.loader.dat container Symfony\Component\Translation\Loader\IcuResFileLoader translation.loader.ini container Symfony\Component\Translation\Loader\IniFileLoader translation.loader.mo container Symfony\Component\Translation\Loader\MoFileLoader translation.loader.php container Symfony\Component\Translation\Loader\PhpFileLoader translation.loader.po container Symfony\Component\Translation\Loader\PoFileLoader translation.loader.qt container Symfony\Component\Translation\Loader\QtFileLoader translation.loader.res container Symfony\Component\Translation\Loader\IcuResFileLoader translation.loader.xliff container Symfony\Component\Translation\Loader\XliffFileLoader translation.loader.yml container Symfony\Component\Translation\Loader\YamlFileLoader translation.writer container Symfony\Component\Translation\Writer\TranslationWriter translator n/a alias for translator.default translator.default container Symfony\Bundle\FrameworkBundle\Translation\Translator twig container Twig_Environment twig.controller.exception container Symfony\Bundle\TwigBundle\Controller\ExceptionController twig.exception_listener container Symfony\Component\HttpKernel\EventListener\ExceptionListener twig.loader container Symfony\Bundle\TwigBundle\Loader\FilesystemLoader twig.translation.extractor container Symfony\Bridge\Twig\Translation\TwigExtractor uri_signer container Symfony\Component\HttpKernel\UriSigner bla_orm.listener container Bla\OrmBundle\EventListener\UserManager validator container Symfony\Component\Validator\Validator web_profiler.controller.exception container Symfony\Bundle\WebProfilerBundle\Controller\ExceptionController web_profiler.controller.profiler container Symfony\Bundle\WebProfilerBundle\Controller\ProfilerController web_profiler.controller.router container Symfony\Bundle\WebProfilerBundle\Controller\RouterController web_profiler.debug_toolbar container 
Symfony\Bundle\WebProfilerBundle\EventListener\WebDebugToolbarListener

Update: It seems that the listener is not invoked when an updateAction generated by generate:doctrine:crud has taken place, though. In another part of the code the listener does seem to be invoked. Both controllers use $em->persist($something); $em->flush(); to save the changes, so I would expect the listener to be invoked in both cases.
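One detail that may explain the update case: Doctrine dispatches prePersist only for entities being persisted for the first time; when an already-managed entity is changed and flushed, it dispatches preUpdate instead, even though the controller still calls persist() and flush(). A sketch of what an additional handler could look like (untested; it assumes the User entity has a mapped password field and that the plain-text password was set on the entity before the flush), registered with an extra tag - { name: doctrine.event_listener, event: preUpdate } in the yml above:

// added to Bla\OrmBundle\EventListener\UserManager; a sketch, not the original code
public function preUpdate(\Doctrine\ORM\Event\PreUpdateEventArgs $args)
{
    $entity = $args->getEntity();
    // 'password' is the assumed mapped field name
    if ($entity instanceof User && $args->hasChangedField('password')) {
        $encoder = $this->encoderFactory->getEncoder($entity);
        // re-encode the new plain password with the existing salt
        $args->setNewValue('password', $encoder->encodePassword(
            $args->getNewValue('password'),
            $entity->getSalt()
        ));
    }
}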

    Read the article

  • eventmachine on debian fails install via rubygems

    - by Max
this has been killing me for the last 5 hours. I don't seem to be able to get eventmachine running on my Debian box. Here is the output:

$ gem install thin
Building native extensions. This could take a while...
ERROR: Error installing thin:
ERROR: Failed to build gem native extension.
/home/eventhub/.rvm/rubies/ruby-1.9.3-p125/bin/ruby extconf.rb
checking for rb_trap_immediate in ruby.h,rubysig.h... no
checking for rb_thread_blocking_region()... yes
checking for inotify_init() in sys/inotify.h... yes
checking for writev() in sys/uio.h... yes
checking for rb_wait_for_single_fd()... yes
checking for rb_enable_interrupt()... yes
checking for rb_time_new()... yes
checking for sys/event.h... no
checking for epoll_create() in sys/epoll.h... yes
creating Makefile
make
compiling kb.cpp
cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++ cc1plus: warning: command line option "-Wimplicit-function-declaration" is valid for C/ObjC but not for C++ In file included from project.h:149, from kb.cpp:20: binder.h:35: warning: type qualifiers ignored on function return type In file included from project.h:150, from kb.cpp:20: em.h:84: warning: type qualifiers ignored on function return type em.h:85: warning: type qualifiers ignored on function return type em.h:86: warning: type qualifiers ignored on function return type em.h:88: warning: type qualifiers ignored on function return type em.h:89: warning: type qualifiers ignored on function return type em.h:90: warning: type qualifiers ignored on function return type em.h:91: warning: type qualifiers ignored on function return type em.h:93: warning: type qualifiers ignored on function return type em.h:99: warning: type qualifiers ignored on function return type em.h:116: warning: type qualifiers ignored on function return type em.h:125: warning: type qualifiers ignored on function return type In file included from project.h:154, from kb.cpp:20: eventmachine.h:46: warning: type qualifiers ignored on function return type eventmachine.h:47: warning: type qualifiers ignored on function return type eventmachine.h:48: warning: type qualifiers ignored on function return type eventmachine.h:50: warning: type qualifiers ignored on function return type eventmachine.h:65: warning: type qualifiers ignored on function return type eventmachine.h:66: warning: type qualifiers ignored on function return type eventmachine.h:67: warning: type qualifiers ignored on function return type eventmachine.h:68: warning: type qualifiers ignored on function return type In file included from project.h:154, from kb.cpp:20: eventmachine.h:103: warning: type qualifiers ignored on function return type eventmachine.h:105: warning: type qualifiers ignored on function return type eventmachine.h:108: warning: type qualifiers ignored on function return type
compiling rubymain.cpp
cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++ cc1plus: warning: command line option "-Wimplicit-function-declaration" is valid for C/ObjC but not for C++ In file included from project.h:149, from rubymain.cpp:20: binder.h:35: warning: type qualifiers ignored on function return type In file included from project.h:150, from rubymain.cpp:20: em.h:84: warning: type qualifiers ignored on function return type em.h:85: warning: type qualifiers ignored on function return type em.h:86: warning: type qualifiers ignored on function return type em.h:88: warning: type qualifiers ignored on function return type em.h:89:
warning: type qualifiers ignored on function return type em.h:90: warning: type qualifiers ignored on function return type em.h:91: warning: type qualifiers ignored on function return type em.h:93: warning: type qualifiers ignored on function return type em.h:99: warning: type qualifiers ignored on function return type em.h:116: warning: type qualifiers ignored on function return type em.h:125: warning: type qualifiers ignored on function return type In file included from project.h:154, from rubymain.cpp:20: eventmachine.h:46: warning: type qualifiers ignored on function return type eventmachine.h:47: warning: type qualifiers ignored on function return type eventmachine.h:48: warning: type qualifiers ignored on function return type eventmachine.h:50: warning: type qualifiers ignored on function return type eventmachine.h:65: warning: type qualifiers ignored on function return type eventmachine.h:66: warning: type qualifiers ignored on function return type eventmachine.h:67: warning: type qualifiers ignored on function return type eventmachine.h:68: warning: type qualifiers ignored on function return type In file included from project.h:154, from rubymain.cpp:20: eventmachine.h:103: warning: type qualifiers ignored on function return type eventmachine.h:105: warning: type qualifiers ignored on function return type eventmachine.h:108: warning: type qualifiers ignored on function return type compiling ssl.cpp cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++ cc1plus: warning: command line option "-Wimplicit-function-declaration" is valid for C/ObjC but not for C++ In file included from project.h:149, from ssl.cpp:23: binder.h:35: warning: type qualifiers ignored on function return type In file included from project.h:150, from ssl.cpp:23: em.h:84: warning: type qualifiers ignored on function return type em.h:85: warning: type qualifiers ignored on function return type em.h:86: warning: type qualifiers ignored on function return type em.h:88: warning: type qualifiers ignored on function return type em.h:89: warning: type qualifiers ignored on function return type em.h:90: warning: type qualifiers ignored on function return type em.h:91: warning: type qualifiers ignored on function return type em.h:93: warning: type qualifiers ignored on function return type em.h:99: warning: type qualifiers ignored on function return type em.h:116: warning: type qualifiers ignored on function return type em.h:125: warning: type qualifiers ignored on function return type In file included from project.h:154, from ssl.cpp:23: eventmachine.h:46: warning: type qualifiers ignored on function return type eventmachine.h:47: warning: type qualifiers ignored on function return type eventmachine.h:48: warning: type qualifiers ignored on function return type eventmachine.h:50: warning: type qualifiers ignored on function return type eventmachine.h:65: warning: type qualifiers ignored on function return type eventmachine.h:66: warning: type qualifiers ignored on function return type eventmachine.h:67: warning: type qualifiers ignored on function return type eventmachine.h:68: warning: type qualifiers ignored on function return type In file included from project.h:154, from ssl.cpp:23: eventmachine.h:103: warning: type qualifiers ignored on function return type eventmachine.h:105: warning: type qualifiers ignored on function return type eventmachine.h:108: warning: type qualifiers ignored on function return type compiling cmain.cpp cc1plus: warning: command line option 
"-Wdeclaration-after-statement" is valid for C/ObjC but not for C++ cc1plus: warning: command line option "-Wimplicit-function-declaration" is valid for C/ObjC but not for C++ In file included from project.h:149, from cmain.cpp:20: binder.h:35: warning: type qualifiers ignored on function return type In file included from project.h:150, from cmain.cpp:20: em.h:84: warning: type qualifiers ignored on function return type em.h:85: warning: type qualifiers ignored on function return type em.h:86: warning: type qualifiers ignored on function return type em.h:88: warning: type qualifiers ignored on function return type em.h:89: warning: type qualifiers ignored on function return type em.h:90: warning: type qualifiers ignored on function return type em.h:91: warning: type qualifiers ignored on function return type em.h:93: warning: type qualifiers ignored on function return type em.h:99: warning: type qualifiers ignored on function return type em.h:116: warning: type qualifiers ignored on function return type em.h:125: warning: type qualifiers ignored on function return type In file included from project.h:154, from cmain.cpp:20: eventmachine.h:46: warning: type qualifiers ignored on function return type eventmachine.h:47: warning: type qualifiers ignored on function return type eventmachine.h:48: warning: type qualifiers ignored on function return type eventmachine.h:50: warning: type qualifiers ignored on function return type eventmachine.h:65: warning: type qualifiers ignored on function return type eventmachine.h:66: warning: type qualifiers ignored on function return type eventmachine.h:67: warning: type qualifiers ignored on function return type eventmachine.h:68: warning: type qualifiers ignored on function return type In file included from project.h:154, from cmain.cpp:20: eventmachine.h:103: warning: type qualifiers ignored on function return type eventmachine.h:105: warning: type qualifiers ignored on function return type eventmachine.h:108: warning: type qualifiers ignored on function return type cmain.cpp:96: warning: type qualifiers ignored on function return type cmain.cpp:107: warning: type qualifiers ignored on function return type cmain.cpp:117: warning: type qualifiers ignored on function return type cmain.cpp:127: warning: type qualifiers ignored on function return type cmain.cpp:269: warning: type qualifiers ignored on function return type cmain.cpp:279: warning: type qualifiers ignored on function return type cmain.cpp:289: warning: type qualifiers ignored on function return type cmain.cpp:299: warning: type qualifiers ignored on function return type cmain.cpp:309: warning: type qualifiers ignored on function return type cmain.cpp:329: warning: type qualifiers ignored on function return type cmain.cpp:678: warning: type qualifiers ignored on function return type compiling em.cpp cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++ cc1plus: warning: command line option "-Wimplicit-function-declaration" is valid for C/ObjC but not for C++ In file included from project.h:149, from em.cpp:23: binder.h:35: warning: type qualifiers ignored on function return type In file included from project.h:150, from em.cpp:23: em.h:84: warning: type qualifiers ignored on function return type em.h:85: warning: type qualifiers ignored on function return type em.h:86: warning: type qualifiers ignored on function return type em.h:88: warning: type qualifiers ignored on function return type em.h:89: warning: type qualifiers ignored on function 
return type em.h:90: warning: type qualifiers ignored on function return type em.h:91: warning: type qualifiers ignored on function return type em.h:93: warning: type qualifiers ignored on function return type em.h:99: warning: type qualifiers ignored on function return type em.h:116: warning: type qualifiers ignored on function return type em.h:125: warning: type qualifiers ignored on function return type In file included from project.h:154, from em.cpp:23: eventmachine.h:46: warning: type qualifiers ignored on function return type eventmachine.h:47: warning: type qualifiers ignored on function return type eventmachine.h:48: warning: type qualifiers ignored on function return type eventmachine.h:50: warning: type qualifiers ignored on function return type eventmachine.h:65: warning: type qualifiers ignored on function return type eventmachine.h:66: warning: type qualifiers ignored on function return type eventmachine.h:67: warning: type qualifiers ignored on function return type eventmachine.h:68: warning: type qualifiers ignored on function return type In file included from project.h:154, from em.cpp:23: eventmachine.h:103: warning: type qualifiers ignored on function return type eventmachine.h:105: warning: type qualifiers ignored on function return type eventmachine.h:108: warning: type qualifiers ignored on function return type em.cpp: In member function 'bool EventMachine_t::_RunEpollOnce()': em.cpp:578: warning: 'int rb_thread_select(int, fd_set*, fd_set*, fd_set*, timeval*)' is deprecated (declared at /home/eventhub/.rvm/rubies/ruby-1.9.3-p125/include/ruby-1.9.1/ruby/intern.h:379) em.cpp:578: warning: 'int rb_thread_select(int, fd_set*, fd_set*, fd_set*, timeval*)' is deprecated (declared at /home/eventhub/.rvm/rubies/ruby-1.9.3-p125/include/ruby-1.9.1/ruby/intern.h:379) em.cpp: In member function 'bool EventMachine_t::_RunSelectOnce()': em.cpp:974: warning: 'int rb_thread_select(int, fd_set*, fd_set*, fd_set*, timeval*)' is deprecated (declared at /home/eventhub/.rvm/rubies/ruby-1.9.3-p125/include/ruby-1.9.1/ruby/intern.h:379) em.cpp:974: warning: 'int rb_thread_select(int, fd_set*, fd_set*, fd_set*, timeval*)' is deprecated (declared at /home/eventhub/.rvm/rubies/ruby-1.9.3-p125/include/ruby-1.9.1/ruby/intern.h:379) em.cpp: At global scope: em.cpp:1057: warning: type qualifiers ignored on function return type em.cpp:1079: warning: type qualifiers ignored on function return type em.cpp:1265: warning: type qualifiers ignored on function return type em.cpp:1338: warning: type qualifiers ignored on function return type em.cpp:1510: warning: type qualifiers ignored on function return type em.cpp:1593: warning: type qualifiers ignored on function return type em.cpp:1856: warning: type qualifiers ignored on function return type em.cpp:1982: warning: type qualifiers ignored on function return type em.cpp:2046: warning: type qualifiers ignored on function return type em.cpp:2070: warning: type qualifiers ignored on function return type em.cpp:2142: warning: type qualifiers ignored on function return type
em.cpp:2361: fatal error: error writing to /tmp/ccdlOK0T.s: No space left on device
compilation terminated.
make: *** [em.o] Error 1
Gem files will remain installed in /home/eventhub/.rvm/gems/ruby-1.9.3-p125/gems/eventmachine-1.0.1 for inspection.
Results logged to /home/eventhub/.rvm/gems/ruby-1.9.3-p125/gems/eventmachine-1.0.1/ext/gem_make.out

Any thoughts? I read a lot of different ways to solve this issue, but none of them worked. Thanks
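The decisive line is buried near the end of the output: em.cpp:2361: fatal error: error writing to /tmp/ccdlOK0T.s: No space left on device. The compiler ran out of room for its temporary files, so this looks like a disk-space problem rather than anything eventmachine-specific. A minimal sketch of a retry (the TMPDIR path is just an example; gcc honors the TMPDIR environment variable for its scratch files):

df -h /tmp                         # confirm the filesystem holding /tmp is full
mkdir -p /home/eventhub/tmp        # hypothetical scratch dir on a roomier filesystem
export TMPDIR=/home/eventhub/tmp
gem install thin                   # retry the native build with temp files redirected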

    Read the article

< Previous Page | 176 177 178 179 180 181  | Next Page >