
  • Active Directory and Apple's Workgroup Manager

    - by qbn
    I thought I'd share my experiences here. I work for a small business with only ~20 users. I wanted the ability to use managed client preferences to assign things like the software update server; basically, the ability to manage my Macs easily and in a native way. At first I tried the magic triangle solution, but I found this to be very complicated. Not only does it require a Mac OS X Server, but it gives you two points of failure. Additionally, each Mac workstation must be bound to both servers. Eventually I sucked it up and went with the schema changes documented here. I was hesitant at first, because the instructions require a lot of manual work. However, it was fairly basic and only took me about an hour and a half. Below you'll find the schema changes file that resulted from my work. I followed the instructions exactly and double-checked everything, and after six months of having this in place things have been running great. Too good not to share. I hope I save someone a couple of hours.
    # ================================================================== # # This file should be imported with the following command: # ldifde -i -u -f Apple AD Schema Changes.ldf -s server:port -b username domain password -j . -c "cn=Configuration,dc=X" #configurationNamingContext # LDIFDE.EXE from AD/AM V1.0 or above must be used. # This LDIF file should be imported into AD or AD/AM. It may not work for other directories. # # ================================================================== # ================================================================== # Attributes # ================================================================== # Attribute: apple-category dn: cn=apple-category,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.10.4 ldapDisplayName: apple-category attributeSyntax: 2.5.5.12 adminDescription: Category for the computer or neighborhood oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computeralias dn: cn=apple-computeralias,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.3 ldapDisplayName: apple-computeralias attributeSyntax: 2.5.5.12 adminDescription: XML plist referring to a computer record oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computer-list-groups dn: cn=apple-computer-list-groups,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.11.4 ldapDisplayName: apple-computer-list-groups attributeSyntax: 2.5.5.12 adminDescription: groups oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computers dn: cn=apple-computers,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.11.3 ldapDisplayName: apple-computers attributeSyntax: 2.5.5.12 adminDescription: computers oMSyntax: 64 systemOnly: FALSE # Attribute: apple-data-stamp dn: cn=apple-data-stamp,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.12.2 ldapDisplayName: apple-data-stamp attributeSyntax: 2.5.5.5 adminDescription: data stamp oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-dns-domain dn: cn=apple-dns-domain,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.18.1 ldapDisplayName: apple-dns-domain attributeSyntax: 2.5.5.12 adminDescription: DNS
domain oMSyntax: 64 systemOnly: FALSE # Attribute: apple-dnsname dn: cn=apple-dnsname,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.4 ldapDisplayName: apple-dnsname attributeSyntax: 2.5.5.12 adminDescription: DNS name oMSyntax: 64 systemOnly: FALSE # Attribute: apple-dns-nameserver dn: cn=apple-dns-nameserver,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.18.2 ldapDisplayName: apple-dns-nameserver attributeSyntax: 2.5.5.12 adminDescription: DNS name server list oMSyntax: 64 systemOnly: FALSE # Attribute: apple-group-homeowner dn: cn=apple-group-homeowner,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.14.2 ldapDisplayName: apple-group-homeowner attributeSyntax: 2.5.5.5 adminDescription: group home owner settings oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-group-homeurl dn: cn=apple-group-homeurl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.14.1 ldapDisplayName: apple-group-homeurl attributeSyntax: 2.5.5.5 adminDescription: group home url oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-imhandle dn: cn=apple-imhandle,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.21 ldapDisplayName: apple-imhandle attributeSyntax: 2.5.5.12 adminDescription: IM handle (service:account name) oMSyntax: 64 systemOnly: FALSE # Attribute: apple-keyword dn: cn=apple-keyword,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.19 ldapDisplayName: apple-keyword attributeSyntax: 2.5.5.12 adminDescription: keywords oMSyntax: 64 systemOnly: FALSE # Attribute: apple-mcxflags dn: cn=apple-mcxflags,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.10 ldapDisplayName: apple-mcxflags attributeSyntax: 2.5.5.12 adminDescription: mcx flags oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-mcxsettings dn: cn=apple-mcxsettings,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.16 ldapDisplayName: apple-mcxsettings attributeSyntax: 2.5.5.12 adminDescription: mcx settings oMSyntax: 64 systemOnly: FALSE # Attribute: apple-neighborhoodalias dn: cn=apple-neighborhoodalias,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.2 ldapDisplayName: apple-neighborhoodalias attributeSyntax: 2.5.5.12 adminDescription: XML plist referring to another neighborhood record oMSyntax: 64 systemOnly: FALSE # Attribute: apple-networkview dn: cn=apple-networkview,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.10.3 ldapDisplayName: apple-networkview attributeSyntax: 2.5.5.12 adminDescription: Network view for the computer oMSyntax: 64 systemOnly: FALSE # Attribute: apple-nodepathxml dn: cn=apple-nodepathxml,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.1 ldapDisplayName: apple-nodepathxml attributeSyntax: 2.5.5.12 adminDescription: 
XML plist of directory node path oMSyntax: 64 systemOnly: FALSE # Attribute: apple-service-location dn: cn=apple-service-location,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.5 ldapDisplayName: apple-service-location attributeSyntax: 2.5.5.12 adminDescription: Service location oMSyntax: 64 systemOnly: FALSE # Attribute: apple-service-port dn: cn=apple-service-port,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.3 ldapDisplayName: apple-service-port attributeSyntax: 2.5.5.9 adminDescription: Service port number oMSyntax: 2 systemOnly: FALSE # Attribute: apple-service-type dn: cn=apple-service-type,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.1 ldapDisplayName: apple-service-type attributeSyntax: 2.5.5.5 adminDescription: type of service oMSyntax: 22 systemOnly: FALSE # Attribute: apple-service-url dn: cn=apple-service-url,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.2 ldapDisplayName: apple-service-url attributeSyntax: 2.5.5.5 adminDescription: URL of service oMSyntax: 22 systemOnly: FALSE # Attribute: apple-user-authenticationhint dn: cn=apple-user-authenticationhint,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.15 ldapDisplayName: apple-user-authenticationhint attributeSyntax: 2.5.5.12 adminDescription: password hint oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-class dn: cn=apple-user-class,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.7 ldapDisplayName: apple-user-class attributeSyntax: 2.5.5.5 adminDescription: user class oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homequota dn: cn=apple-user-homequota,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.8 ldapDisplayName: apple-user-homequota attributeSyntax: 2.5.5.5 adminDescription: home directory quota oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homesoftquota dn: cn=apple-user-homesoftquota,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.17 ldapDisplayName: apple-user-homesoftquota attributeSyntax: 2.5.5.5 adminDescription: home directory soft quota oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homeurl dn: cn=apple-user-homeurl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.6 ldapDisplayName: apple-user-homeurl attributeSyntax: 2.5.5.5 adminDescription: home directory URL oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-mailattribute dn: cn=apple-user-mailattribute,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.9 ldapDisplayName: apple-user-mailattribute attributeSyntax: 2.5.5.12 adminDescription: mail attribute oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-picture dn: cn=apple-user-picture,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd 
objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.12 ldapDisplayName: apple-user-picture attributeSyntax: 2.5.5.12 adminDescription: picture oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-printattribute dn: cn=apple-user-printattribute,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.13 ldapDisplayName: apple-user-printattribute attributeSyntax: 2.5.5.12 adminDescription: print attribute oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-webloguri dn: cn=apple-webloguri,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.22 ldapDisplayName: apple-webloguri attributeSyntax: 2.5.5.12 adminDescription: Weblog URI oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-xmlplist dn: cn=apple-xmlplist,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.17.1 ldapDisplayName: apple-xmlplist attributeSyntax: 2.5.5.12 adminDescription: XML plist data oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: ipHostNumber dn: cn=ipHostNumber,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.1.1.1.19 ldapDisplayName: ipHostNumber attributeSyntax: 2.5.5.5 adminDescription: IP address oMSyntax: 22 systemOnly: FALSE rangeUpper: 128 # Attribute: macAddress dn: cn=macAddress,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.1.1.1.22 ldapDisplayName: macAddress attributeSyntax: 2.5.5.5 adminDescription: MAC address oMSyntax: 22 systemOnly: FALSE rangeUpper: 128 # Attribute: mountDirectory dn: cn=apple-mountDirectory,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.1 ldapDisplayName: mountDirectory attributeSyntax: 2.5.5.12 adminDescription: mount path oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountDumpFrequency dn: cn=apple-mountDumpFrequency,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.4 ldapDisplayName: mountDumpFrequency attributeSyntax: 2.5.5.5 adminDescription: mount dump frequency oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountOption dn: cn=apple-mountOption,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.3 ldapDisplayName: mountOption attributeSyntax: 2.5.5.5 adminDescription: mount options oMSyntax: 22 systemOnly: FALSE # Attribute: mountPassNo dn: cn=apple-mountPassNo,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.5 ldapDisplayName: mountPassNo attributeSyntax: 2.5.5.5 adminDescription: mount passno oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountType dn: cn=apple-mountType,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.2 ldapDisplayName: mountType attributeSyntax: 2.5.5.5 adminDescription: mount VFS type oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: ttl dn: cn=ttl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.250.1.60 
ldapDisplayName: ttl attributeSyntax: 2.5.5.9 oMSyntax: 2 isSingleValued: TRUE systemOnly: FALSE dn: changetype: modify add: schemaUpdateNow schemaUpdateNow: 1 - # ================================================================== # Classes # ================================================================== # Class: apple-computer dn: cn=apple-computer,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.10 ldapDisplayName: apple-computer adminDescription: computer objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-category mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.4 # mayContain: apple-computer-list-groups mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.4 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-networkview mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.3 # mayContain: apple-service-url mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.2 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: macAddress mayContain: 1.3.6.1.1.1.1.22 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 # Class: apple-computer-list dn: cn=apple-computer-list,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.11 ldapDisplayName: apple-computer-list adminDescription: computer list objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-computer-list-groups mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.4 # mayContain: apple-computers mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.3 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-configuration dn: cn=apple-configuration,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.12 ldapDisplayName: apple-configuration adminDescription: configuration objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-data-stamp mayContain: 1.3.6.1.4.1.63.1000.1.1.1.12.2 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-group dn: cn=apple-group,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.14 ldapDisplayName: apple-group adminDescription: group account objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-group-homeowner mayContain: 1.3.6.1.4.1.63.1000.1.1.1.14.2 # mayContain: apple-group-homeurl mayContain: 1.3.6.1.4.1.63.1000.1.1.1.14.1 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-user-picture mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.12 # 
mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 # Class: apple-location dn: cn=apple-location,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.18 ldapDisplayName: apple-location objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-dns-domain mayContain: 1.3.6.1.4.1.63.1000.1.1.1.18.1 # mayContain: apple-dns-nameserver mayContain: 1.3.6.1.4.1.63.1000.1.1.1.18.2 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-neighborhood dn: cn=apple-neighborhood,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.20 ldapDisplayName: apple-neighborhood objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-category mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.4 # mayContain: apple-computeralias mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.3 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-neighborhoodalias mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.2 # mayContain: apple-nodepathxml mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.1 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 possSuperiors: 2.5.6.5 possSuperiors: container # Class: apple-serverassistant-config dn: cn=apple-serverassistant-config,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.17 ldapDisplayName: apple-serverassistant-config objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-service dn: cn=apple-service,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.19 ldapDisplayName: apple-service objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mustContain: apple-service-type mustContain: 1.3.6.1.4.1.63.1000.1.1.1.19.1 # mayContain: apple-dnsname mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.4 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-service-location mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.5 # mayContain: apple-service-port mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.3 # mayContain: apple-service-url mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.2 # mayContain: ipHostNumber mayContain: 1.3.6.1.1.1.1.19 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-user dn: cn=apple-user,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.1 ldapDisplayName: apple-user adminDescription: apple user account objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-imhandle mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.21 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-user-authenticationhint mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.15 # mayContain: apple-user-class mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.7 # mayContain: apple-user-homequota mayContain: 
1.3.6.1.4.1.63.1000.1.1.1.1.8 # mayContain: apple-user-homesoftquota mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.17 # mayContain: apple-user-homeurl mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.6 # mayContain: apple-user-mailattribute mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.9 # mayContain: apple-user-picture mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.12 # mayContain: apple-user-printattribute mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.13 # mayContain: apple-webloguri mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.22 # Class: mount dn: cn=apple-mount,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.8 ldapDisplayName: mount objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: mountDirectory mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.1 # mayContain: mountDumpFrequency mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.4 # mayContain: mountOption mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.3 # mayContain: mountPassNo mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.5 # mayContain: mountType mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.2 possSuperiors: 2.5.6.5 possSuperiors: container dn: changetype: modify add: schemaUpdateNow schemaUpdateNow: 1 - # ================================================================== # Updating present elements # ================================================================== # Add the new class to the user object dn: CN=User,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-user - # Add the new class to the computer object dn: CN=Computer,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-computer - # Add the new class to the group object dn: CN=Group,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-group - # Add the new class to the configuration object dn: CN=Configuration,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-configuration -
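
    For reference, here is the import command from the file header again, broken across lines and with hypothetical placeholder values (dc01.example.com, the Administrator account, and the example.com domain are stand-ins; substitute your own domain controller, credentials, and log directory). The -c option rewrites the dc=X placeholder; ldifde accepts the #configurationNamingContext constant for that replacement, as the header already indicates:

        ldifde -i -u -f "Apple AD Schema Changes.ldf" -s dc01.example.com:389 ^
               -b Administrator example.com P@ssw0rd -j . ^
               -c "cn=Configuration,dc=X" #configurationNamingContext

    As a rough sanity check after the import (again assuming a dc=example,dc=com forest), you can export the new definitions back out and confirm the apple-* attributes now exist in the schema partition:

        ldifde -f apple-schema-check.ldf -d "cn=Schema,cn=Configuration,dc=example,dc=com" -r "(ldapDisplayName=apple-*)" -l ldapDisplayName,attributeID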


  • Cisco SR520w FE - WAN Port Stops Working

    - by Mike Hanley
    I have set up a Cisco SR520W and everything appears to be working. After about 1-2 days, it looks like the WAN port stops forwarding traffic to the Internet gateway IP of the device. If I unplug and then plug in the network cable connecting the WAN port of the SR520W to my Comcast cable modem, traffic starts flowing again. Also, if I restart the SR520W, the traffic will flow again. Any ideas? Here is the running config:
    Current configuration : 10559 bytes ! version 12.4 no service pad no service timestamps debug uptime service timestamps log datetime msec no service password-encryption ! hostname hostname.mydomain.com ! boot-start-marker boot-end-marker ! logging message-counter syslog no logging rate-limit enable secret 5 <removed> ! aaa new-model ! ! aaa authentication login default local aaa authorization exec default local ! ! aaa session-id common clock timezone PST -8 clock summer-time PDT recurring ! crypto pki trustpoint TP-self-signed-334750407 enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-334750407 revocation-check none rsakeypair TP-self-signed-334750407 ! ! crypto pki certificate chain TP-self-signed-334750407 certificate self-signed 01 <removed> quit dot11 syslog ! dot11 ssid <removed> vlan 75 authentication open authentication key-management wpa guest-mode wpa-psk ascii 0 <removed> ! ip source-route ! ! ip dhcp excluded-address 172.16.0.1 172.16.0.10 ! ip dhcp pool inside import all network 172.16.0.0 255.240.0.0 default-router 172.16.0.1 dns-server 10.0.0.15 10.0.0.12 domain-name mydomain.com ! ! ip cef ip domain name mydomain.com ip name-server 68.87.76.178 ip name-server 66.240.48.9 ip port-map user-ezvpn-remote port udp 10000 ip ips notify SDEE ip ips name sdm_ips_rule ! ip ips signature-category category all retired true category ios_ips basic retired false ! ip inspect log drop-pkt no ipv6 cef ! multilink bundle-name authenticated parameter-map type inspect z1-z2-pmap audit-trail on password encryption aes ! ! username admin privilege 15 secret 5 <removed> ! crypto key pubkey-chain rsa named-key realm-cisco.pub key-string <removed> quit ! ! ! ! ! ! crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 connect auto group EZVPN_GROUP_1 key <removed> mode client peer 64.1.208.90 virtual-interface 1 username admin password <removed> xauth userid mode local ! ! archive log config logging enable logging size 600 hidekeys ! ! !
class-map type inspect match-any SDM_AH match access-group name SDM_AH class-map type inspect match-any SDM-Voice-permit match protocol sip class-map type inspect match-any SDM_ESP match access-group name SDM_ESP class-map type inspect match-any SDM_EASY_VPN_REMOTE_TRAFFIC match protocol isakmp match protocol ipsec-msft match class-map SDM_AH match class-map SDM_ESP match protocol user-ezvpn-remote class-map type inspect match-all SDM_EASY_VPN_REMOTE_PT match class-map SDM_EASY_VPN_REMOTE_TRAFFIC match access-group 101 class-map type inspect match-any Easy_VPN_Remote_VT match access-group 102 class-map type inspect match-any sdm-cls-icmp-access match protocol icmp match protocol tcp match protocol udp class-map type inspect match-any sdm-cls-insp-traffic match protocol cuseeme match protocol dns match protocol ftp match protocol h323 match protocol https match protocol icmp match protocol imap match protocol pop3 match protocol netshow match protocol shell match protocol realmedia match protocol rtsp match protocol smtp extended match protocol sql-net match protocol streamworks match protocol tftp match protocol vdolive match protocol tcp match protocol udp class-map type inspect match-any L4-inspect-class match protocol icmp class-map type inspect match-all sdm-invalid-src match access-group 100 class-map type inspect match-all dhcp_out_self match access-group name dhcp-resp-permit class-map type inspect match-all dhcp_self_out match access-group name dhcp-req-permit class-map type inspect match-all sdm-protocol-http match protocol http ! ! policy-map type inspect sdm-permit-icmpreply class type inspect dhcp_self_out pass class type inspect sdm-cls-icmp-access inspect class class-default pass policy-map type inspect sdm-permit_VT class type inspect Easy_VPN_Remote_VT pass class class-default drop policy-map type inspect sdm-inspect class type inspect SDM-Voice-permit pass class type inspect sdm-cls-insp-traffic inspect class type inspect sdm-invalid-src drop log class type inspect sdm-protocol-http inspect z1-z2-pmap class class-default pass policy-map type inspect sdm-inspect-voip-in class type inspect SDM-Voice-permit pass class class-default drop policy-map type inspect sdm-permit class type inspect SDM_EASY_VPN_REMOTE_PT pass class type inspect dhcp_out_self pass class class-default drop ! zone security ezvpn-zone zone security out-zone zone security in-zone zone-pair security sdm-zp-in-ezvpn1 source in-zone destination ezvpn-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-out-ezpn1 source out-zone destination ezvpn-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-ezvpn-out1 source ezvpn-zone destination out-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-self-out source self destination out-zone service-policy type inspect sdm-permit-icmpreply zone-pair security sdm-zp-out-in source out-zone destination in-zone service-policy type inspect sdm-inspect-voip-in zone-pair security sdm-zp-ezvpn-in1 source ezvpn-zone destination in-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-out-self source out-zone destination self service-policy type inspect sdm-permit zone-pair security sdm-zp-in-out source in-zone destination out-zone service-policy type inspect sdm-inspect ! bridge irb ! ! interface FastEthernet0 switchport access vlan 75 ! interface FastEthernet1 switchport access vlan 75 ! interface FastEthernet2 switchport access vlan 75 ! interface FastEthernet3 switchport access vlan 75 ! 
interface FastEthernet4 description $FW_OUTSIDE$ ip address 75.149.48.76 255.255.255.240 ip nat outside ip ips sdm_ips_rule out ip virtual-reassembly zone-member security out-zone duplex auto speed auto crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 ! interface Virtual-Template1 type tunnel no ip address ip virtual-reassembly zone-member security ezvpn-zone tunnel mode ipsec ipv4 ! interface Dot11Radio0 no ip address ! encryption vlan 75 mode ciphers aes-ccm ! ssid <removed> ! speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 station-role root ! interface Dot11Radio0.75 encapsulation dot1Q 75 native ip virtual-reassembly bridge-group 75 bridge-group 75 subscriber-loop-control bridge-group 75 spanning-disabled bridge-group 75 block-unknown-source no bridge-group 75 source-learning no bridge-group 75 unicast-flooding ! interface Vlan1 no ip address ip virtual-reassembly bridge-group 1 ! interface Vlan75 no ip address ip virtual-reassembly bridge-group 75 bridge-group 75 spanning-disabled ! interface BVI1 no ip address ip nat inside ip virtual-reassembly ! interface BVI75 description $FW_INSIDE$ ip address 172.16.0.1 255.240.0.0 ip nat inside ip ips sdm_ips_rule in ip virtual-reassembly zone-member security in-zone crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 inside ! ip forward-protocol nd ip route 0.0.0.0 0.0.0.0 75.149.48.78 2 ! ip http server ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ip nat inside source list 1 interface FastEthernet4 overload ! ip access-list extended SDM_AH remark SDM_ACL Category=1 permit ahp any any ip access-list extended SDM_ESP remark SDM_ACL Category=1 permit esp any any ip access-list extended dhcp-req-permit remark SDM_ACL Category=1 permit udp any eq bootpc any eq bootps ip access-list extended dhcp-resp-permit remark SDM_ACL Category=1 permit udp any eq bootps any eq bootpc ! access-list 1 remark SDM_ACL Category=2 access-list 1 permit 172.16.0.0 0.15.255.255 access-list 100 remark SDM_ACL Category=128 access-list 100 permit ip host 255.255.255.255 any access-list 100 permit ip 127.0.0.0 0.255.255.255 any access-list 100 permit ip 75.149.48.64 0.0.0.15 any access-list 101 remark SDM_ACL Category=128 access-list 101 permit ip host 64.1.208.90 any access-list 102 remark SDM_ACL Category=1 access-list 102 permit ip any any ! ! ! ! snmp-server community <removed> RO ! control-plane ! bridge 1 protocol ieee bridge 1 route ip bridge 75 route ip banner login ^CSR520 Base Config - MFG 1.0 ^C ! line con 0 no modem enable line aux 0 line vty 0 4 transport input telnet ssh ! scheduler max-task-time 5000 end I also ran some diagnostics when the WAN port stopped working: 1. 
show interface fa4 FastEthernet4 is up, line protocol is up Hardware is PQUICC_FEC, address is 0026.99c5.b434 (bia 0026.99c5.b434) Description: $FW_OUTSIDE$ Internet address is 75.149.48.76/28 MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, 100BaseTX/FX ARP type: ARPA, ARP Timeout 04:00:00 Last input 01:08:15, output 00:00:00, output hang never Last clearing of "show interface" counters never Input queue: 0/75/23/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 1000 bits/sec, 0 packets/sec 336446 packets input, 455403158 bytes Received 23 broadcasts, 0 runts, 0 giants, 37 throttles 41 input errors, 0 CRC, 0 frame, 0 overrun, 41 ignored 0 watchdog 0 input packets with dribble condition detected 172529 packets output, 23580132 bytes, 0 underruns 0 output errors, 0 collisions, 2 interface resets 0 unknown protocol drops 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out 2. show ip route Gateway of last resort is 75.149.48.78 to network 0.0.0.0 C 192.168.75.0/24 is directly connected, BVI75 64.0.0.0/32 is subnetted, 1 subnets S 64.1.208.90 [1/0] via 75.149.48.78 S 192.168.10.0/24 is directly connected, BVI75 75.0.0.0/28 is subnetted, 1 subnets C 75.149.48.64 is directly connected, FastEthernet4 S* 0.0.0.0/0 [2/0] via 75.149.48.78 3. show ip arp Protocol Address Age (min) Hardware Addr Type Interface Internet 75.149.48.65 69 001e.2a39.7b08 ARPA FastEthernet4 Internet 75.149.48.76 - 0026.99c5.b434 ARPA FastEthernet4 Internet 75.149.48.78 93 0022.2d6c.ae36 ARPA FastEthernet4 Internet 192.168.75.1 - 0027.0d58.f5f0 ARPA BVI75 Internet 192.168.75.12 50 7c6d.62c7.8c0a ARPA BVI75 Internet 192.168.75.13 0 001b.6301.1227 ARPA BVI75 4. sh ip cef Prefix Next Hop Interface 0.0.0.0/0 75.149.48.78 FastEthernet4 0.0.0.0/8 drop 0.0.0.0/32 receive 64.1.208.90/32 75.149.48.78 FastEthernet4 75.149.48.64/28 attached FastEthernet4 75.149.48.64/32 receive FastEthernet4 75.149.48.65/32 attached FastEthernet4 75.149.48.76/32 receive FastEthernet4 75.149.48.78/32 attached FastEthernet4 75.149.48.79/32 receive FastEthernet4 127.0.0.0/8 drop 192.168.10.0/24 attached BVI75 192.168.75.0/24 attached BVI75 192.168.75.0/32 receive BVI75 192.168.75.1/32 receive BVI75 192.168.75.12/32 attached BVI75 192.168.75.13/32 attached BVI75 192.168.75.255/32 receive BVI75 224.0.0.0/4 drop 224.0.0.0/24 receive 240.0.0.0/4 drop 255.255.255.255/32 receive Thanks in advance, -Mike
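
    One thing that stands out in the diagnostics above is that FastEthernet4 shows "Last input 01:08:15" while output keeps going, and the ARP entry for the upstream gateway 75.149.48.78 is already 93 minutes old. The next time the WAN stalls, it may be worth capturing the ARP and CEF state before unplugging the cable. The exec commands below are only a diagnostic sketch, not a fix; note that clear arp-cache briefly interrupts traffic and debug arp should be used sparingly on a busy router:

        show ip arp FastEthernet4
        show ip cef 0.0.0.0 0.0.0.0
        ping 75.149.48.78 repeat 5
        clear arp-cache
        debug arp
        undebug all

    If the entry for 75.149.48.78 turns out to be stale or missing when the outage hits, one assumption worth testing (not a confirmed fix) is a shorter ARP timeout on the WAN interface, e.g. "arp timeout 300" under interface FastEthernet4.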


  • Cisco 891w multiple VLAN configuration

    - by Jessica
    I'm having trouble getting my guest network up. I have VLAN 1, which contains all our network resources (servers, desktops, printers, etc.). The wireless is configured to use VLAN 1 but authenticates with WPA2 Enterprise. I just wanted the guest network to be open, or configured with a simple WPA2 Personal password, on its own VLAN 2. I've looked at tons of documentation and it should be working, but I can't even authenticate on the guest network! I posted this on Cisco's support forum a week ago, but no one has really responded. I could really use some help, so if anyone could take a look at the configurations I posted and steer me in the right direction, I would be extremely grateful. Thank you!
    version 15.0 service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname ESI ! boot-start-marker boot-end-marker ! logging buffered 51200 warnings ! aaa new-model ! ! aaa authentication login userauthen local aaa authorization network groupauthor local ! ! ! ! ! aaa session-id common ! ! ! clock timezone EST -5 clock summer-time EDT recurring service-module wlan-ap 0 bootimage autonomous ! crypto pki trustpoint TP-self-signed-3369945891 enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-3369945891 revocation-check none rsakeypair TP-self-signed-3369945891 ! ! crypto pki certificate chain TP-self-signed-3369945891 certificate self-signed 01 (cert is here) quit ip source-route ! ! ip dhcp excluded-address 192.168.1.1 ip dhcp excluded-address 192.168.1.5 ip dhcp excluded-address 192.168.1.2 ip dhcp excluded-address 192.168.1.200 192.168.1.210 ip dhcp excluded-address 192.168.1.6 ip dhcp excluded-address 192.168.1.8 ip dhcp excluded-address 192.168.3.1 ! ip dhcp pool ccp-pool import all network 192.168.1.0 255.255.255.0 default-router 192.168.1.1 dns-server 10.171.12.5 10.171.12.37 lease 0 2 ! ip dhcp pool guest import all network 192.168.3.0 255.255.255.0 default-router 192.168.3.1 dns-server 10.171.12.5 10.171.12.37 ! ! ip cef no ip domain lookup no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO891W-AGN-A-K9 sn FTX153085WL ! ! username ESIadmin privilege 15 secret 5 $1$g1..$JSZ0qxljZAgJJIk/anDu51 username user1 password 0 pass ! ! ! class-map type inspect match-any ccp-cls-insp-traffic match protocol cuseeme match protocol dns match protocol ftp match protocol h323 match protocol https match protocol icmp match protocol imap match protocol pop3 match protocol netshow match protocol shell match protocol realmedia match protocol rtsp match protocol smtp match protocol sql-net match protocol streamworks match protocol tftp match protocol vdolive match protocol tcp match protocol udp class-map type inspect match-all ccp-insp-traffic match class-map ccp-cls-insp-traffic class-map type inspect match-any ccp-cls-icmp-access match protocol icmp class-map type inspect match-all ccp-invalid-src match access-group 100 class-map type inspect match-all ccp-icmp-access match class-map ccp-cls-icmp-access class-map type inspect match-all ccp-protocol-http match protocol http ! ! policy-map type inspect ccp-permit-icmpreply class type inspect ccp-icmp-access inspect class class-default pass policy-map type inspect ccp-inspect class type inspect ccp-invalid-src drop log class type inspect ccp-protocol-http inspect class type inspect ccp-insp-traffic inspect class class-default drop policy-map type inspect ccp-permit class class-default drop !
zone security out-zone zone security in-zone zone-pair security ccp-zp-self-out source self destination out-zone service-policy type inspect ccp-permit-icmpreply zone-pair security ccp-zp-in-out source in-zone destination out-zone service-policy type inspect ccp-inspect zone-pair security ccp-zp-out-self source out-zone destination self service-policy type inspect ccp-permit ! ! crypto isakmp policy 1 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group 3000client key 67Nif8LLmqP_ dns 10.171.12.37 10.171.12.5 pool dynpool acl 101 ! ! crypto ipsec transform-set myset esp-3des esp-sha-hmac ! crypto dynamic-map dynmap 10 set transform-set myset ! ! crypto map clientmap client authentication list userauthen crypto map clientmap isakmp authorization list groupauthor crypto map clientmap client configuration address initiate crypto map clientmap client configuration address respond crypto map clientmap 10 ipsec-isakmp dynamic dynmap ! ! ! ! ! interface FastEthernet0 ! ! interface FastEthernet1 ! ! interface FastEthernet2 ! ! interface FastEthernet3 ! ! interface FastEthernet4 ! ! interface FastEthernet5 ! ! interface FastEthernet6 ! ! interface FastEthernet7 ! ! interface FastEthernet8 ip address dhcp ip nat outside ip virtual-reassembly duplex auto speed auto ! ! interface GigabitEthernet0 description $FW_OUTSIDE$$ES_WAN$ ip address 10...* 255.255.254.0 ip nat outside ip virtual-reassembly zone-member security out-zone duplex auto speed auto crypto map clientmap ! ! interface wlan-ap0 description Service module interface to manage the embedded AP ip unnumbered Vlan1 arp timeout 0 ! ! interface Wlan-GigabitEthernet0 description Internal switch interface connecting to the embedded AP switchport trunk allowed vlan 1-3,1002-1005 switchport mode trunk ! ! interface Vlan1 description $ETH-SW-LAUNCH$$INTF-INFO-FE 1$$FW_INSIDE$ ip address 192.168.1.1 255.255.255.0 ip nat inside ip virtual-reassembly zone-member security in-zone ip tcp adjust-mss 1452 crypto map clientmap ! ! interface Vlan2 description guest ip address 192.168.3.1 255.255.255.0 ip access-group 120 in ip nat inside ip virtual-reassembly zone-member security in-zone ! ! interface Async1 no ip address encapsulation slip ! ! ip local pool dynpool 192.168.1.200 192.168.1.210 ip forward-protocol nd ip http server ip http access-class 23 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! ip dns server ip nat inside source list 23 interface GigabitEthernet0 overload ip route 0.0.0.0 0.0.0.0 10.165.0.1 ! access-list 23 permit 192.168.1.0 0.0.0.255 access-list 100 remark CCP_ACL Category=128 access-list 100 permit ip host 255.255.255.255 any access-list 100 permit ip 127.0.0.0 0.255.255.255 any access-list 100 permit ip 10.165.0.0 0.0.1.255 any access-list 110 permit ip 192.168.0.0 0.0.5.255 any access-list 120 remark ESIGuest Restriction no cdp run ! ! ! ! ! ! control-plane ! ! alias exec dot11radio service-module wlan-ap 0 session Access point version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname ESIRouter ! no logging console enable secret 5 $1$yEH5$CxI5.9ypCBa6kXrUnSuvp1 ! aaa new-model ! ! aaa group server radius rad_eap server 192.168.1.5 auth-port 1812 acct-port 1813 ! aaa group server radius rad_acct server 192.168.1.5 auth-port 1812 acct-port 1813 ! 
aaa authentication login eap_methods group rad_eap aaa authentication enable default line enable aaa authorization exec default local aaa authorization commands 15 default local aaa accounting network acct_methods start-stop group rad_acct ! aaa session-id common clock timezone EST -5 clock summer-time EDT recurring ip domain name ESI ! ! dot11 syslog dot11 vlan-name one vlan 1 dot11 vlan-name two vlan 2 ! dot11 ssid one vlan 1 authentication open eap eap_methods authentication network-eap eap_methods authentication key-management wpa version 2 accounting rad_acct ! dot11 ssid two vlan 2 authentication open guest-mode ! dot11 network-map ! ! username ESIadmin privilege 15 secret 5 $1$p02C$WVHr5yKtRtQxuFxPU8NOx. ! ! bridge irb ! ! interface Dot11Radio0 no ip address no ip route-cache ! encryption vlan 1 mode ciphers aes-ccm ! broadcast-key vlan 1 change 30 ! ! ssid one ! ssid two ! antenna gain 0 station-role root ! interface Dot11Radio0.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding bridge-group 1 spanning-disabled ! interface Dot11Radio0.2 encapsulation dot1Q 2 no ip route-cache bridge-group 2 bridge-group 2 subscriber-loop-control bridge-group 2 block-unknown-source no bridge-group 2 source-learning no bridge-group 2 unicast-flooding bridge-group 2 spanning-disabled ! interface Dot11Radio1 no ip address no ip route-cache shutdown ! encryption vlan 1 mode ciphers aes-ccm ! broadcast-key vlan 1 change 30 ! ! ssid one ! antenna gain 0 dfs band 3 block channel dfs station-role root ! interface Dot11Radio1.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding bridge-group 1 spanning-disabled ! interface GigabitEthernet0 description the embedded AP GigabitEthernet 0 is an internal interface connecting AP with the host router no ip address no ip route-cache ! interface GigabitEthernet0.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 no bridge-group 1 source-learning bridge-group 1 spanning-disabled ! interface GigabitEthernet0.2 encapsulation dot1Q 2 no ip route-cache bridge-group 2 no bridge-group 2 source-learning bridge-group 2 spanning-disabled ! interface BVI1 ip address 192.168.1.2 255.255.255.0 no ip route-cache ! ip http server no ip http secure-server ip http help-path http://www.cisco.com/warp/public/779/smbiz/prodconfig/help/eag access-list 10 permit 192.168.1.0 0.0.0.255 radius-server host 192.168.1.5 auth-port 1812 acct-port 1813 key ***** bridge 1 route ip
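
    Independent of the wireless side, a couple of things in the posted router config may be worth double-checking: the NAT source list (access-list 23) only covers 192.168.1.0/24, so even a successfully associated guest on 192.168.3.0/24 would not be translated out to the Internet, and access-list 120, which is applied inbound on Vlan2, contains only a remark and no permit or deny entries. The snippet below is only a sketch of the kind of change that might be tested, assuming the intent is for guests to get DHCP and reach the Internet while being kept away from the internal 192.168.1.0/24 subnet:

        access-list 23 permit 192.168.3.0 0.0.0.255
        !
        access-list 120 remark ESIGuest Restriction
        access-list 120 permit udp any any eq bootps
        access-list 120 deny   ip 192.168.3.0 0.0.0.255 192.168.1.0 0.0.0.255
        access-list 120 permit ip 192.168.3.0 0.0.0.255 any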


  • PC hangs and reboots from time to time

    - by Bevor
    Hello, I have a very strange problem. Ever since I got my new PC, I have had problems with it. From time to time the computer freezes for a few seconds and then suddenly reboots by itself. I've had this problem since Ubuntu 9.10, and the same with 10.04 and 10.10, which is why I don't think it's a software failure; the problem has persisted too long. It doesn't depend on what I'm doing at the time: sometimes I'm listening to music, sometimes I'm only using Firefox, sometimes I'm running 2 or 3 VMs, sometimes I'm watching a DVD, so it's not isolatable. It can freeze once a day or once a week. I took the PC back to the vendor twice(!). The first time they changed my power supply, but the problem persisted. The second time they told me they ran heavy performance tests for 50 hours but didn't find anything. (How can that be, when I get daily freezes with normal usage?) The vendor didn't check the hard discs because they used their own disc with Windows, so they never checked the Linux installation. Yesterday I ran some intensive hard disc scans with SMART, but no errors were found. I ran memtest three times, but no errors were found either. I already had this problem in my old flat, so I doubt it has anything to do with power fluctuations. I have also tried another electrical socket and changed the power strip, but the problem persists. At the moment I have removed 2 of the RAM modules (2x 2GB); in all I have 6GB, 2x 2GB and 2x 1GB. Could this mix maybe be a problem? Below is a list of my components; I hope somebody finds something I didn't think about yet.
    1x AMD Phenom II X4 965 Black Edition, 3,4Ghz, Quad Core, S-AM3, Boxed
    2x DDR3-RAM 2048MB, PC3-1333 Mhz, CL9, Kingston ValueRAM
    2x DDR3-RAM 1024MB, PC3-1333 Mhz, CL9, Kingston ValueRAM
    2x SATA II Seagate Barracuda 7200.12, 1TB 32MB Cache = RAID 1
    1x DVD ROM SATA LG DH16NSR, 16x/52x
    1x DVD-+R/-+RW SATA LG GH-22NS50
    1x Cardreader 18in1
    1x PCI-E 2.0 GeForce GTS 250, Retail, 1024MB
    1x Power Supply ATX 400 Watt, CHIEFTEC APS-400S, 80 Plus
    1x Network card PCI Intel PRO/1000GT 10/100/1000 MBit
    1x Mainboard Socket-AM3 ASUS M4A79XTD EVO, ATX
    lshw:
    description: Desktop Computer product: System Product Name vendor: System manufacturer version: System Version serial: System Serial Number width: 64 bits capabilities: smbios-2.5 dmi-2.5 vsyscall64 vsyscall32 configuration: boot=normal chassis=desktop uuid=80E4001E-8C00-002C-AA59-E0CB4EBAC29A *-core description: Motherboard product: M4A79XTD EVO vendor: ASUSTeK Computer INC. physical id: 0 version: Rev X.0X serial: MT709CK11101196 slot: To Be Filled By O.E.M. *-firmware description: BIOS vendor: American Megatrends Inc. physical id: 0 version: 0704 (11/25/2009) size: 64KiB capacity: 960KiB capabilities: isa pci pnp apm upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb ls120boot zipboot biosbootspecification *-cpu description: CPU product: AMD Phenom(tm) II X4 965 Processor vendor: Advanced Micro Devices [AMD] physical id: 4 bus info: cpu@0 version: AMD Phenom(tm) II X4 965 Processor serial: To Be Filled By O.E.M.
slot: AM3 size: 800MHz capacity: 3400MHz width: 64 bits clock: 200MHz capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save cpufreq *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 512KiB capacity: 512KiB capabilities: pipeline-burst internal varies data *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 2MiB capacity: 2MiB capabilities: pipeline-burst internal varies unified *-cache:2 description: L3 cache physical id: 7 slot: L3-Cache size: 6MiB capacity: 6MiB capabilities: pipeline-burst internal varies unified *-memory description: System Memory physical id: 36 slot: System board or motherboard size: 2GiB *-bank:0 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber00 vendor: Manufacturer00 physical id: 0 serial: SerNum00 slot: DIMM0 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:1 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber01 vendor: Manufacturer01 physical id: 1 serial: SerNum01 slot: DIMM1 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:2 description: DIMM [empty] product: ModulePartNumber02 vendor: Manufacturer02 physical id: 2 serial: SerNum02 slot: DIMM2 *-bank:3 description: DIMM [empty] product: ModulePartNumber03 vendor: Manufacturer03 physical id: 3 serial: SerNum03 slot: DIMM3 *-pci:0 description: Host bridge product: RD780 Northbridge only dual slot PCI-e_GFX and HT1 K8 part vendor: ATI Technologies Inc physical id: 100 bus info: pci@0000:00:00.0 version: 00 width: 32 bits clock: 66MHz *-pci:0 description: PCI bridge product: RD790 PCI to PCI bridge (external gfx0 port A) vendor: ATI Technologies Inc physical id: 2 bus info: pci@0000:00:02.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:a000(size=4096) memory:f8000000-fbbfffff ioport:d0000000(size=268435456) *-display description: VGA compatible controller product: G92 [GeForce GTS 250] vendor: nVidia Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:18 memory:fa000000-faffffff memory:d0000000-dfffffff memory:f8000000-f9ffffff ioport:ac00(size=128) memory:fbbe0000-fbbfffff *-pci:1 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port C) vendor: ATI Technologies Inc physical id: 6 bus info: pci@0000:00:06.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:b000(size=4096) memory:fbc00000-fbcfffff ioport:f6f00000(size=1048576) *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. 
physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 03 serial: e0:cb:4e:ba:c2:9a size: 10MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:45 ioport:b800(size=256) memory:f6fff000-f6ffffff memory:f6ff8000-f6ffbfff memory:fbcf0000-fbcfffff *-pci:2 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port D) vendor: ATI Technologies Inc physical id: 7 bus info: pci@0000:00:07.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:c000(size=4096) memory:fbd00000-fbdfffff *-firewire description: FireWire (IEEE 1394) product: VT6315 Series Firewire Controller vendor: VIA Technologies, Inc. physical id: 0 bus info: pci@0000:03:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress ohci bus_master cap_list configuration: driver=firewire_ohci latency=0 resources: irq:19 memory:fbdff800-fbdfffff ioport:c800(size=256) *-pci:3 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port E) vendor: ATI Technologies Inc physical id: 9 bus info: pci@0000:00:09.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:43 ioport:d000(size=4096) memory:fbe00000-fbefffff *-ide description: IDE interface product: 88SE6121 SATA II Controller vendor: Marvell Technology Group Ltd. 
physical id: 0 bus info: pci@0000:04:00.0 version: b2 width: 32 bits clock: 33MHz capabilities: ide pm msi pciexpress bus_master cap_list configuration: driver=pata_marvell latency=0 resources: irq:17 ioport:dc00(size=8) ioport:d880(size=4) ioport:d800(size=8) ioport:d480(size=4) ioport:d400(size=16) memory:fbeffc00-fbefffff *-storage description: SATA controller product: SB700/SB800 SATA Controller [IDE mode] vendor: ATI Technologies Inc physical id: 11 bus info: pci@0000:00:11.0 logical name: scsi0 logical name: scsi2 version: 00 width: 32 bits clock: 66MHz capabilities: storage msi ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=64 resources: irq:44 ioport:9000(size=8) ioport:8000(size=4) ioport:7000(size=8) ioport:6000(size=4) ioport:5000(size=16) memory:f7fffc00-f7ffffff *-disk:0 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: CC38 serial: 9VP3WD9Z size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@0:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@0:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@0:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-disk:1 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 1 bus info: scsi@2:0.0.0 logical name: /dev/sdb version: CC38 serial: 9VP3SCPF size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@2:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@2:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@2:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized 
configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@2:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-usb:0 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 12 bus info: pci@0000:00:12.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffd000-f7ffdfff *-usb:1 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 12.1 bus info: pci@0000:00:12.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffe000-f7ffefff *-usb:2 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 12.2 bus info: pci@0000:00:12.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:17 memory:f7fff800-f7fff8ff *-usb:3 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 13 bus info: pci@0000:00:13.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffb000-f7ffbfff *-usb:4 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 13.1 bus info: pci@0000:00:13.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffc000-f7ffcfff *-usb:5 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 13.2 bus info: pci@0000:00:13.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:19 memory:f7fff400-f7fff4ff *-serial UNCLAIMED description: SMBus product: SBx00 SMBus Controller vendor: ATI Technologies Inc physical id: 14 bus info: pci@0000:00:14.0 version: 3c width: 32 bits clock: 66MHz capabilities: ht cap_list configuration: latency=0 *-ide description: IDE interface product: SB700/SB800 IDE Controller vendor: ATI Technologies Inc physical id: 14.1 bus info: pci@0000:00:14.1 logical name: scsi5 version: 00 width: 32 bits clock: 66MHz capabilities: ide msi bus_master cap_list emulated configuration: driver=pata_atiixp latency=64 resources: irq:16 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:ff00(size=16) *-cdrom:0 description: DVD reader product: DVDROM DH16NS30 vendor: HL-DT-ST physical id: 0.0.0 bus info: scsi@5:0.0.0 logical name: /dev/cdrom1 logical name: /dev/dvd1 logical name: /dev/scd0 logical name: /dev/sr0 version: 1.00 capabilities: removable audio dvd configuration: ansiversion=5 status=nodisc *-cdrom:1 description: DVD-RAM writer product: DVDRAM GH22NS50 vendor: HL-DT-ST physical id: 0.1.0 bus info: scsi@5:0.1.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/scd1 logical name: /dev/sr1 version: TN02 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram 
configuration: ansiversion=5 status=nodisc *-multimedia description: Audio device product: SBx00 Azalia (Intel HDA) vendor: ATI Technologies Inc physical id: 14.2 bus info: pci@0000:00:14.2 version: 00 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: driver=HDA Intel latency=64 resources: irq:16 memory:f7ff4000-f7ff7fff *-isa description: ISA bridge product: SB700/SB800 LPC host controller vendor: ATI Technologies Inc physical id: 14.3 bus info: pci@0000:00:14.3 version: 00 width: 32 bits clock: 66MHz capabilities: isa bus_master configuration: latency=0 *-pci:4 description: PCI bridge product: SBx00 PCI to PCI Bridge vendor: ATI Technologies Inc physical id: 14.4 bus info: pci@0000:00:14.4 version: 00 width: 32 bits clock: 66MHz capabilities: pci subtractive_decode bus_master resources: ioport:e000(size=4096) memory:fbf00000-fbffffff *-network description: Ethernet interface product: 82541PI Gigabit Ethernet Controller vendor: Intel Corporation physical id: 5 bus info: pci@0000:05:05.0 logical name: eth1 version: 05 serial: 00:1b:21:56:f3:60 size: 100MB/s capacity: 1GB/s width: 32 bits clock: 66MHz capabilities: pm pcix bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k6-NAPI duplex=full firmware=N/A ip=192.168.1.2 latency=64 link=yes mingnt=255 multicast=yes port=twisted pair speed=100MB/s resources: irq:20 memory:fbfe0000-fbffffff memory:fbfc0000-fbfdffff ioport:ec00(size=64) memory:fbfa0000-fbfbffff *-usb:6 description: USB Controller product: SB700/SB800 USB OHCI2 Controller vendor: ATI Technologies Inc physical id: 14.5 bus info: pci@0000:00:14.5 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffa000-f7ffafff *-pci:1 description: Host bridge product: Family 10h Processor HyperTransport Configuration vendor: Advanced Micro Devices [AMD] physical id: 101 bus info: pci@0000:00:18.0 version: 00 width: 32 bits clock: 33MHz *-pci:2 description: Host bridge product: Family 10h Processor Address Map vendor: Advanced Micro Devices [AMD] physical id: 102 bus info: pci@0000:00:18.1 version: 00 width: 32 bits clock: 33MHz *-pci:3 description: Host bridge product: Family 10h Processor DRAM Controller vendor: Advanced Micro Devices [AMD] physical id: 103 bus info: pci@0000:00:18.2 version: 00 width: 32 bits clock: 33MHz *-pci:4 description: Host bridge product: Family 10h Processor Miscellaneous Control vendor: Advanced Micro Devices [AMD] physical id: 104 bus info: pci@0000:00:18.3 version: 00 width: 32 bits clock: 33MHz configuration: driver=k10temp resources: irq:0 *-pci:5 description: Host bridge product: Family 10h Processor Link Control vendor: Advanced Micro Devices [AMD] physical id: 105 bus info: pci@0000:00:18.4 version: 00 width: 32 bits clock: 33MHz *-scsi physical id: 1 bus info: usb@2:3 logical name: scsi8 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk:0 description: SCSI Disk physical id: 0.0.0 bus info: scsi@8:0.0.0 logical name: /dev/sdc *-disk:1 description: SCSI Disk physical id: 0.0.1 bus info: scsi@8:0.0.1 logical name: /dev/sdd *-disk:2 description: SCSI Disk physical id: 0.0.2 bus info: scsi@8:0.0.2 logical name: /dev/sde *-disk:3 description: SCSI Disk physical id: 0.0.3 bus info: scsi@8:0.0.3 logical name: /dev/sdf *-network DISABLED description: Ethernet interface physical id: 1 logical name: 
vboxnet0 serial: 0a:00:27:00:00:00 capabilities: ethernet physical configuration: broadcast=yes multicast=yes

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server runs Ubuntu 12.04 Server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, so out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**"

    #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    - Setting Prefer Mounted Volumes = no in the Job Definition
    - Setting multiple devices in the Autochanger resource

    Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution..
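    For context, a quick back-of-the-envelope check of those figures (a rough Python sketch using only the numbers quoted above, and treating 1GB as 1024MB, which is how 140MB/s works out to roughly 492GB/hr): the 48-hour window calls for about 430GB/hr sustained, which is uncomfortably close to a single drive's rated maximum and far above the 200-250GB/hr CommVault actually delivers, hence the desire to keep both drives writing at once.

    # Back-of-the-envelope throughput check, using only the figures quoted above.
    # 1 GB is treated as 1024 MB to match the 140 MB/s ~= 492 GB/hr figure.
    MB_PER_GB = 1024.0

    drive_mb_s = 140.0                            # rated LTO5 write speed, per drive
    drive_gb_hr = drive_mb_s * 3600 / MB_PER_GB   # ~492 GB/hr
    needed_gb_hr = 20 * 1024 / 48.0               # 20 TB in a 48-hour window, ~427 GB/hr

    print("Per-drive tape throughput: %.0f GB/hr" % drive_gb_hr)
    print("Both drives in parallel:   %.0f GB/hr" % (2 * drive_gb_hr))
    print("Needed for 20TB in 48h:    %.0f GB/hr" % needed_gb_hr)
    print("Observed via CommVault:    200-250 GB/hr")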
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • Inbound SIP calls through Cisco 881 NAT hang up after a few seconds

    - by MasterRoot24
    I've recently moved to a Cisco 881 router for my WAN link. I was previously using a Cisco Linksys WAG320N as my modem/router/WiFi AP/NAT firewall. The WAG320N is now running in bridged mode, so it's simply acting as a modem with one of its LAN ports connected to FE4 WAN on my Cisco 881. The Cisco 881 gets a DHCP-provided IP from my ISP. My LAN is part of default Vlan 1 (192.168.1.0/24). General internet connectivity is working great, and I've managed to set up static NAT rules for my HTTP/HTTPS/SMTP/etc. services which are running on my LAN. I don't know whether it's worth mentioning that I've opted to use an NVI NAT setup (ip nat enable, as opposed to the traditional ip nat outside/ip nat inside). My reason for this is that NVI allows NAT loopback from my LAN to the WAN IP and back into the necessary server on the LAN.

    I run an Asterisk 1.8 PBX on my LAN, which connects to a SIP provider on the internet. Both inbound and outbound calls through the old setup (WAG320N providing routing/NAT) worked fine. However, since moving to the Cisco 881, inbound calls drop after around 10 seconds, whereas outbound calls work fine. The following message is logged on my Asterisk PBX:

    [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3641 retrans_pkt: Retransmission timeout reached on transmission [email protected] for seqno 1 (Critical Response) -- See https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions Packet timed out after 6528ms with no response
    [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3670 retrans_pkt: Hanging up call [email protected] - no reply to our critical packet (see https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions).

    (I know that this is quite a common issue - I've spent the best part of 2 days solid on this, trawling Google.) I've done as I was told and checked https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions. Referring to the section "Other SIP requests" in the page linked above, I believe the hangup is caused by the ACK from my SIP provider not being passed back through NAT to Asterisk on my PBX. I tried to ascertain this by dumping the packets on my WAN interface on the 881, and managed to obtain a PCAP dump of packets in/out of my WAN interface. Here's an example of an ACK being received by the router from my provider:

    689 21.219999 193.x.x.x 188.x.x.x SIP 502 Request: ACK sip:[email protected] |

    However, a SIP trace on the Asterisk server shows that there are no ACKs received in response to the 200 OK from my PBX: http://pastebin.com/wwHpLPPz

    In the past, I have been strongly advised to disable any sort of SIP ALG on routers and/or firewalls, and the many posts regarding this issue on the internet seem to support this. I believe the config command to disable the SIP ALG on Cisco IOS is no ip nat service sip udp port 5060, but this doesn't appear to help the situation. To confirm that the setting is in place:

    Router1#show running-config | include sip
    no ip nat service sip udp port 5060

    Another interesting twist: for a short period of time, I tried another provider. Luckily, my trial account with them is still available, so I reverted my Asterisk config back to the revision before I integrated with my current provider. I then dialled in to the DDI associated with the trial trunk, the call didn't get hung up, and I didn't get the error above! To me, this points at the provider; however, I know that, like all providers, they will say "There's no issues with our SIP proxies - it's your firewall."
I'm tempted to agree with this, as this issue was not apparent with the old WAG320N router when it was doing the NAT'ing. I'm sure you'll want to see my running-config too: ! ! Last configuration change at 15:55:07 UTC Sun Dec 9 2012 by xxx version 15.2 no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug datetime msec localtime show-timezone service timestamps log datetime msec localtime show-timezone no service password-encryption service sequence-numbers ! hostname Router1 ! boot-start-marker boot-end-marker ! ! security authentication failure rate 10 log security passwords min-length 6 logging buffered 4096 logging console critical enable secret 4 xxx ! aaa new-model ! ! aaa authentication login local_auth local ! ! ! ! ! aaa session-id common ! memory-size iomem 10 ! crypto pki trustpoint TP-self-signed-xxx enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-xxx revocation-check none rsakeypair TP-self-signed-xxx ! ! crypto pki certificate chain TP-self-signed-xxx certificate self-signed 01 quit no ip source-route no ip gratuitous-arps ip auth-proxy max-login-attempts 5 ip admission max-login-attempts 5 ! ! ! ! ! no ip bootp server ip domain name dmz.merlin.local ip domain list dmz.merlin.local ip domain list merlin.local ip name-server x.x.x.x ip inspect audit-trail ip inspect udp idle-time 1800 ip inspect dns-timeout 7 ip inspect tcp idle-time 14400 ip inspect name autosec_inspect ftp timeout 3600 ip inspect name autosec_inspect http timeout 3600 ip inspect name autosec_inspect rcmd timeout 3600 ip inspect name autosec_inspect realaudio timeout 3600 ip inspect name autosec_inspect smtp timeout 3600 ip inspect name autosec_inspect tftp timeout 30 ip inspect name autosec_inspect udp timeout 15 ip inspect name autosec_inspect tcp timeout 3600 ip cef login block-for 3 attempts 3 within 3 no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO881-SEC-K9 sn ! ! username xxx privilege 15 secret 4 xxx username xxx secret 4 xxx ! ! ! ! ! ip ssh time-out 60 ! ! ! ! ! ! ! ! ! interface FastEthernet0 no ip address ! interface FastEthernet1 no ip address ! interface FastEthernet2 no ip address ! interface FastEthernet3 switchport access vlan 2 no ip address ! interface FastEthernet4 ip address dhcp no ip redirects no ip unreachables no ip proxy-arp ip nat enable duplex auto speed auto ! interface Vlan1 ip address 192.168.1.1 255.255.255.0 no ip redirects no ip unreachables no ip proxy-arp ip nat enable ! interface Vlan2 ip address 192.168.0.2 255.255.255.0 ! ip forward-protocol nd ip http server ip http access-class 1 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! no ip nat service sip udp port 5060 ip nat source list 1 interface FastEthernet4 overload ip nat source static tcp x.x.x.x 80 interface FastEthernet4 80 ip nat source static tcp x.x.x.x 443 interface FastEthernet4 443 ip nat source static tcp x.x.x.x 25 interface FastEthernet4 25 ip nat source static tcp x.x.x.x 587 interface FastEthernet4 587 ip nat source static tcp x.x.x.x 143 interface FastEthernet4 143 ip nat source static tcp x.x.x.x 993 interface FastEthernet4 993 ip nat source static tcp x.x.x.x 1723 interface FastEthernet4 1723 ! ! logging trap debugging logging facility local2 access-list 1 permit 192.168.1.0 0.0.0.255 access-list 1 permit 192.168.0.0 0.0.0.255 no cdp run ! ! ! ! control-plane ! ! banner motd Authorized Access only ! 
line con 0 login authentication local_auth length 0 transport output all line aux 0 exec-timeout 15 0 login authentication local_auth transport output all line vty 0 1 access-class 1 in logging synchronous login authentication local_auth length 0 transport preferred none transport input telnet transport output all line vty 2 4 access-class 1 in login authentication local_auth length 0 transport input ssh transport output all ! ! end ...and, if it's of any use, here's my Asterisk SIP config: [general] context=default ; Default context for calls allowoverlap=no ; Disable overlap dialing support. (Default is yes) udpbindaddr=0.0.0.0 ; IP address to bind UDP listen socket to (0.0.0.0 binds to all) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) tcpenable=no ; Enable server for incoming TCP connections (default is no) tcpbindaddr=0.0.0.0 ; IP address for TCP server to bind to (0.0.0.0 binds to all interfaces) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) srvlookup=yes ; Enable DNS SRV lookups on outbound calls ; Note: Asterisk only uses the first host ; in SRV records ; Disabling DNS SRV lookups disables the ; ability to place SIP calls based on domain ; names to some other SIP users on the Internet ; Specifying a port in a SIP peer definition or ; when dialing outbound calls will supress SRV ; lookups for that peer or call. directmedia=no ; Don't allow direct RTP media between extensions (doesn't work through NAT) externhost=<MY DYNDNS HOSTNAME> ; Our external hostname to resolve to IP and be used in NAT'ed packets localnet=192.168.1.0/24 ; Define our local network so we know which packets need NAT'ing qualify=yes ; Qualify peers by default dtmfmode=rfc2833 ; Set the default DTMF mode disallow=all ; Disallow all codecs by default allow=ulaw ; Allow G.711 u-law allow=alaw ; Allow G.711 a-law ; ---------------------- ; SIP Trunk Registration ; ---------------------- ; Orbtalk register => <MY SIP PROVIDER USER NAME>:[email protected]/<MY DDI> ; Main Orbtalk number ; ---------- ; Trunks ; ---------- [orbtalk] ; Main Orbtalk trunk type=peer insecure=invite host=sipgw3.orbtalk.co.uk nat=yes username=<MY SIP PROVIDER USER NAME> defaultuser=<MY SIP PROVIDER USER NAME> fromuser=<MY SIP PROVIDER USER NAME> secret=xxx context=inbound I really don't know where to go with this. If anyone can help me find out why these calls are being dropped off, I'd be grateful if you could chime in! Please let me know if any further info is required.
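    One way to pin down where the ACK disappears would be to compare the WAN-side capture from the 881 with a LAN-side capture taken on the Asterisk box (e.g. tcpdump -w lan.pcap udp port 5060) and see which SIP messages show up in each. The following is only a rough sketch, not part of the setup above: it assumes a classic libpcap capture file with plain Ethernet framing and no VLAN tags, and the sip_lines helper name is made up for illustration. It prints the request/status line of every packet on UDP 5060:

    import struct, sys

    def sip_lines(path):
        # Yield "timestamp sport->dport first-line" for every UDP/5060 packet in a pcap file.
        with open(path, 'rb') as f:
            magic = struct.unpack('<I', f.read(24)[:4])[0]
            endian = '<' if magic == 0xa1b2c3d4 else '>'
            while True:
                hdr = f.read(16)
                if len(hdr) < 16:
                    break
                ts_sec, ts_usec, incl_len, _orig_len = struct.unpack(endian + 'IIII', hdr)
                frame = f.read(incl_len)
                if len(frame) < 14 + 20 + 8 or frame[12:14] != b'\x08\x00':
                    continue                          # too short, or not IPv4 over Ethernet
                ihl = (ord(frame[14:15]) & 0x0f) * 4  # IP header length in bytes
                if ord(frame[23:24]) != 17:           # IP protocol 17 = UDP
                    continue
                udp = frame[14 + ihl:]
                sport, dport = struct.unpack('!HH', udp[:4])
                if 5060 not in (sport, dport):
                    continue
                first = udp[8:].split(b'\r\n', 1)[0].decode('ascii', 'replace')
                yield '%d.%06d %d->%d %s' % (ts_sec, ts_usec, sport, dport, first)

    if __name__ == '__main__':
        for line in sip_lines(sys.argv[1]):           # e.g. python sip_lines.py lan.pcap
            print(line)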

    Read the article

  • Unusually high dentry cache usage

    - by Wolfgang Stengel
    Problem

    A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was not responding to requests in a timely manner anymore due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this:

    - Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB.
    - Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB).
    - After the restart, the free memory was 119 GB, going down to around 100 GB within an hour, as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM.
    - After the restart, Slab memory was around 300 MB.
    - I have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line at a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop most of it is used for dentry entries.

    Free memory:

    free -m
                 total       used       free     shared    buffers     cached
    Mem:        129048      76435      52612          0        144       7675
    -/+ buffers/cache:      68615      60432
    Swap:         8191          0       8191

    Meminfo:

    cat /proc/meminfo
    MemTotal:       132145324 kB
    MemFree:         53620068 kB
    Buffers:           147760 kB
    Cached:           8239072 kB
    SwapCached:             0 kB
    Active:          20300940 kB
    Inactive:         6512716 kB
    Active(anon):    18408460 kB
    Inactive(anon):     24736 kB
    Active(file):     1892480 kB
    Inactive(file):   6487980 kB
    Unevictable:         8608 kB
    Mlocked:             8608 kB
    SwapTotal:        8388600 kB
    SwapFree:         8388600 kB
    Dirty:              11416 kB
    Writeback:              0 kB
    AnonPages:       18436224 kB
    Mapped:             94536 kB
    Shmem:               6364 kB
    Slab:            46240380 kB
    SReclaimable:    44561644 kB
    SUnreclaim:       1678736 kB
    KernelStack:         9336 kB
    PageTables:        457516 kB
    NFS_Unstable:           0 kB
    Bounce:                 0 kB
    WritebackTmp:           0 kB
    CommitLimit:     72364108 kB
    Committed_AS:    22305444 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:       480164 kB
    VmallocChunk:   34290830848 kB
    HardwareCorrupted:      0 kB
    AnonHugePages:   12216320 kB
    HugePages_Total:     2048
    HugePages_Free:      2048
    HugePages_Rsvd:         0
    HugePages_Surp:         0
    Hugepagesize:        2048 kB
    DirectMap4k:         5604 kB
    DirectMap2M:      2078720 kB
    DirectMap1G:    132120576 kB

    Slabtop:

    slabtop --once
     Active / Total Objects (% used)    : 225920064 / 226193412 (99.9%)
     Active / Total Slabs (% used)      : 11556364 / 11556415 (100.0%)
     Active / Total Caches (% used)     : 110 / 194 (56.7%)
     Active / Total Size (% used)       : 43278793.73K / 43315465.42K (99.9%)
     Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K

         OBJS    ACTIVE  USE OBJ SIZE    SLABS OBJ/SLAB CACHE SIZE NAME
    221416340 221416039   3%    0.19K 11070817       20  44283268K dentry
      1123443   1122739  99%    0.41K   124827        9    499308K fuse_request
      1122320   1122180  99%    0.75K   224464        5    897856K fuse_inode
       761539    754272  99%    0.20K    40081       19    160324K vm_area_struct
       437858    223259  50%    0.10K    11834       37     47336K buffer_head
       353353    347519  98%    0.05K     4589       77     18356K anon_vma_chain
       325090    324190  99%    0.06K     5510       59     22040K size-64
       146272    145422  99%    0.03K     1306      112      5224K size-32
       137625    137614  99%    1.02K    45875        3    183500K nfs_inode_cache
       128800    118407  91%    0.04K     1400       92      5600K anon_vma
        59101     46853  79%    0.55K     8443        7     33772K radix_tree_node
        52620     52009  98%    0.12K     1754       30      7016K size-128
        19359     19253  99%    0.14K      717       27      2868K sysfs_dir_cache
        10240      7746  75%    0.19K      512       20      2048K filp

    VFS cache pressure:

    cat /proc/sys/vm/vfs_cache_pressure
    125

    Swappiness:

    cat /proc/sys/vm/swappiness
    0

    I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, apparently the machine experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB.

    Questions

    I have these questions:

    - Am I correct in thinking that the Slab memory is always physical RAM, and the number is already subtracted from the MemFree value?
    - Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files, however most of them are archives and not being accessed at all for regular web traffic.
    - What could be an explanation for the fact that the number of cached inodes is much lower than the number of cached dentries? Should they not be related somehow?
    - If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen?
    - Is there any way to "look into" the dentry cache to see what all this memory is (i.e. what are the paths that are being cached)? Perhaps this points to some kind of memory leak, symlink loop, or indeed to something the PHP application is doing wrong.
    - The PHP application code as well as all asset files are mounted via the GlusterFS network file system, could that have something to do with it?

    Please keep in mind that I cannot investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see if the Slab memory is indeed reclaimable. Any insights into what could be going on and how I can investigate any further would be greatly appreciated.
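    For what it's worth, the growth itself can be tracked without root: /proc/meminfo and /proc/sys/fs/dentry-state are normally world-readable (unlike /proc/slabinfo, which may not be readable without root). The following is just a rough monitoring sketch, with an arbitrary CSV file name and sampling interval, that logs Slab, SReclaimable and the kernel's dentry counters so the ~5 GB/day trend can be correlated with the dentry count over time:

    import csv, time

    def meminfo_kb(*keys):
        # /proc/meminfo lines look like "Slab:        46240380 kB"; values are in kB.
        vals = {}
        with open('/proc/meminfo') as f:
            for line in f:
                name, rest = line.split(':', 1)
                if name in keys:
                    vals[name] = int(rest.split()[0])
        return [vals.get(k) for k in keys]

    def dentry_state():
        # /proc/sys/fs/dentry-state fields: nr_dentry, nr_unused, age_limit, want_pages, dummy, dummy
        with open('/proc/sys/fs/dentry-state') as f:
            fields = f.read().split()
        return int(fields[0]), int(fields[1])

    with open('slab_growth.csv', 'a') as out:
        writer = csv.writer(out)
        while True:
            slab, reclaimable, unreclaim = meminfo_kb('Slab', 'SReclaimable', 'SUnreclaim')
            nr_dentry, nr_unused = dentry_state()
            writer.writerow([int(time.time()), slab, reclaimable, unreclaim, nr_dentry, nr_unused])
            out.flush()
            time.sleep(300)   # one sample every five minutes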
Updates Some further diagnostic information: Mounts: cat /proc/self/mounts rootfs / rootfs rw 0 0 proc /proc proc rw,relatime 0 0 sysfs /sys sysfs rw,relatime 0 0 devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0 devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0 tmpfs /dev/shm tmpfs rw,relatime 0 0 /dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0 /proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0 /dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0 tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0 tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0 cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0 cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0 cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0 cgroup /cgroup/memory cgroup rw,relatime,memory 0 0 cgroup /cgroup/devices cgroup rw,relatime,devices 0 0 cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0 cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0 cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0 /etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 /etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0 172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0 Mount info: cat /proc/self/mountinfo 16 21 0:3 / /proc rw,relatime - proc proc rw 17 21 0:0 / /sys rw,relatime - sysfs sysfs rw 18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=755 19 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000 20 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw 21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered 22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw 23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered 24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777 25 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777 26 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw 27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset 28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu 29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct 30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory 31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices 32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer 33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls 34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio 35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 36 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 37 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw 39 21 0:31 / /data/www rw,relatime - nfs 
172.17.39.78:/www rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 GlusterFS config: cat /etc/glusterfs/glusterfs-www.vol volume remote1 type protocol/client option transport-type tcp option remote-host 172.17.39.71 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote2 type protocol/client option transport-type tcp option remote-host 172.17.39.72 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote3 type protocol/client option transport-type tcp option remote-host 172.17.39.73 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote4 type protocol/client option transport-type tcp option remote-host 172.17.39.74 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume replicate1 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote1 remote2 end-volume volume replicate2 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote3 remote4 end-volume volume distribute type cluster/distribute subvolumes replicate1 replicate2 end-volume volume iocache type performance/io-cache option cache-size 8192MB # default is 32MB subvolumes distribute end-volume volume writeback type performance/write-behind option cache-size 1024MB option window-size 1MB subvolumes iocache end-volume ### Add io-threads for parallel requisitions volume iothreads type performance/io-threads option thread-count 64 # default is 16 subvolumes writeback end-volume volume ra type performance/read-ahead option page-size 2MB option page-count 16 option force-atime-update no subvolumes iothreads end-volume

    Read the article

  • Valgrind says "stack allocation," I say "heap allocation"

    - by Joel J. Adamson
    Dear Friends, I am trying to trace a segfault with valgrind. I get the following message from valgrind: ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C5: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) ==3683== ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C7: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) However, here's the offending line: /* allocate mating table */ age_dep_data->mtable = malloc (age_dep_data->geno * sizeof (double *)); if (age_dep_data->mtable == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); for (int j = 0; j < age_dep_data->geno; j++) { 131=> age_dep_data->mtable[j] = calloc (age_dep_data->geno, sizeof (double)); if (age_dep_data->mtable[j] == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); } What gives? I thought any call to malloc or calloc allocated heap space; there is no other variable allocated here, right? Is it possible there's another allocation going on (the offending stack allocation) that I'm not seeing? You asked to see the code, here goes: /* Copyright 2010 Joel J. Adamson <[email protected]> $Id: age_dep.c 1010 2010-04-21 19:19:16Z joel $ age_dep.c:main file Joel J. Adamson -- http://www.unc.edu/~adamsonj Servedio Lab University of North Carolina at Chapel Hill CB #3280, Coker Hall Chapel Hill, NC 27599-3280 This file is part of an investigation of age-dependent sexual selection. This code is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with haploid. If not, see <http://www.gnu.org/licenses/>. 
*/ #include "age_dep.h" /* global variables */ extern struct argp age_dep_argp; /* global error message variables */ char * nullmsg = "Null pointer: %i"; /* error message for conversions: */ char * errmsg = "Representation error: %s"; /* precision for formatted output: */ const char prec[] = "%-#9.8f "; const size_t age_max = AGEMAX; /* maximum age of males */ static int keep_going_p = 1; int main (int argc, char ** argv) { /* often used counters: */ int i, j; /* read the command line */ struct age_dep_args age_dep_args = { NULL, NULL, NULL }; argp_parse (&age_dep_argp, argc, argv, 0, 0, &age_dep_args); /* set the parameters here: */ /* initialize an age_dep_params structure, set the members */ age_dep_params_t * params = malloc (sizeof (age_dep_params_t)); if (params == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); age_dep_init_params (params, &age_dep_args); /* initialize frequencies: this initializes a list of pointers to initial frqeuencies, terminated by a NULL pointer*/ params->freqs = age_dep_init (&age_dep_args); params->by = 0.0; /* what range of parameters do we want, and with what stepsize? */ /* we should go from 0 to half-of-theta with a step size of about 0.01 */ double from = 0.0; double to = params->theta / 2.0; double stepsz = 0.01; /* did you think I would spell the whole word? */ unsigned int numparts = floor(to / stepsz); do { #pragma omp parallel for private(i) firstprivate(params) \ shared(stepsz, numparts) for (i = 0; i < numparts; i++) { params->by = i * stepsz; int tries = 0; while (keep_going_p) { /* each time through, modify mfreqs and mating table, then go again */ keep_going_p = age_dep_iterate (params, ++tries); if (keep_going_p == ERANGE) error (ERANGE, ERANGE, "Failure to converge\n"); } fprintf (stdout, "%i iterations\n", tries); } /* for i < numparts */ params->freqs = params->freqs->next; } while (params->freqs->next != NULL); return 0; } inline double age_dep_pmate (double age_dep_t, unsigned int genot, double bp, double ba) { /* the probability of mating between these phenotypes */ /* the female preference depends on whether the female has the preference allele, the strength of preference (parameter bp) and the male phenotype (age_dep_t); if the female lacks the preference allele, then this will return 0, which is not quite accurate; it should return 1 */ return bits_isset (genot, CLOCI)? 1.0 - exp (-bp * age_dep_t) + ba: 1.0; } inline double age_dep_trait (int age, unsigned int genot, double by) { /* return the male trait, a function of the trait locus, age, the age-dependent scaling parameter (bx) and the males condition genotype */ double C; double T; /* get the male's condition genotype */ C = (double) bits_popcount (bits_extract (0, CLOCI, genot)); /* get his trait genotype */ T = bits_isset (genot, CLOCI + 1)? 1.0: 0.0; /* return the trait value */ return T * by * exp (age * C); } int age_dep_iterate (age_dep_params_t * data, unsigned int tries) { /* main driver routine */ /* number of bytes for female frequencies */ size_t geno = data->age_dep_data->geno; size_t genosize = geno * sizeof (double); /* female frequencies are equal to male frequencies at birth (before selection) */ double ffreqs[geno]; if (ffreqs == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); /* do not set! 
Use memcpy (we need to alter male frequencies (selection) without altering female frequencies) */ memmove (ffreqs, data->freqs->freqs[0], genosize); /* for (int i = 0; i < geno; i++) */ /* ffreqs[i] = data->freqs->freqs[0][i]; */ #ifdef PRMTABLE age_dep_pr_mfreqs (data); #endif /* PRMTABLE */ /* natural selection: */ age_dep_ns (data); /* normalized mating table with new frequencies */ age_dep_norm_mtable (ffreqs, data); #ifdef PRMTABLE age_dep_pr_mtable (data); #endif /* PRMTABLE */ double * newfreqs; /* mutate here */ /* i.e. get the new frequency of 0-year-olds using recombination; */ newfreqs = rec_mating (data->age_dep_data); /* return block */ { if (sim_stop_ck (data->freqs->freqs[0], newfreqs, GENO, TOL) == 0) { /* if we have converged, stop the iterations and handle the data */ age_dep_sim_out (data, stdout); return 0; } else if (tries > MAXTRIES) return ERANGE; else { /* advance generations */ for (int j = age_max - 1; j < 0; j--) memmove (data->freqs->freqs[j], data->freqs->freqs[j-1], genosize); /* advance the first age-class */ memmove (data->freqs->freqs[0], newfreqs, genosize); return 1; } } } void age_dep_ns (age_dep_params_t * data) { /* calculate the new frequency of genotypes given additive fitness and selection coefficient s */ size_t geno = data->age_dep_data->geno; double w[geno]; double wbar, dtheta, ttheta, dcond, tcond; double t, cond; /* fitness parameters */ double mu, nu; mu = data->wparams[0]; nu = data->wparams[1]; /* calculate fitness */ for (int j = 0; j < age_max; j++) { int i; for (i = 0; i < geno; i++) { /* calculate male trait: */ t = age_dep_trait(j, i, data->by); /* calculate condition: */ cond = (double) bits_popcount (bits_extract(0, CLOCI, i)); /* trait-based fitness term */ dtheta = data->theta - t; ttheta = (dtheta * dtheta) / (2.0 * nu * nu); /* condition-based fitness term */ dcond = CLOCI - cond; tcond = (dcond * dcond) / (2.0 * mu * mu); /* calculate male fitness */ w[i] = 1 + exp(-tcond) - exp(-ttheta); } /* calculate mean fitness */ /* as long as we calculate wbar before altering any values of freqs[], we're safe */ wbar = gen_mean (data->freqs->freqs[j], w, geno); for (i = 0; i < geno; i++) data->freqs->freqs[j][i] = (data->freqs->freqs[j][i] * w[i]) / wbar; } } void age_dep_norm_mtable (double * ffreqs, age_dep_params_t * params) { /* this function produces a single mating table that forms the input for recombination () */ /* i is female genotype; j is male genotype; k is male age */ int i,j,k; double norm_denom; double trait; size_t geno = params->age_dep_data->geno; for (i = 0; i < geno; i++) { double norm_mtable[geno]; /* initialize the denominator: */ norm_denom = 0.0; /* find the probability of mating and add it to the denominator */ for (j = 0; j < geno; j++) { /* initialize entry: */ norm_mtable[j] = 0.0; for (k = 0; k < age_max; k++) { trait = age_dep_trait (k, j, params->by); norm_mtable[j] += age_dep_pmate (trait, i, params->bp, params->ba) * (params->freqs->freqs)[k][j]; } norm_denom += norm_mtable[j]; } /* now calculate entry (i,j) */ for (j = 0; j < geno; j++) params->age_dep_data->mtable[i][j] = (ffreqs[i] * norm_mtable[j]) / norm_denom; } } My current suspicion is the array newfreqs: I can't memmove, memcpy or assign a stack variable then hope it will persist, can I? rec_mating() returns double *.

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a gui app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult. I feel like maintenance on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far:

    Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read. It can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example:

    import textwrap, os, time
    from subprocess import Popen

    test_path = 'test_file.py'
    with open(test_path, 'w') as file:
        file.write(textwrap.dedent('''
            import time
            for i in range(3):
                print 'Hello %i' % i
                time.sleep(1)'''))

    proc = Popen('python -B "%s"' % test_path)
    for i in range(3):
        print 'Hello %i' % i
        time.sleep(1)
    os.remove(test_path)

    I guess I could have the child process write its output to a file. But it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have code for the child process I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print. And it adds some noise to the output. And of course I can't do it if I don't have access to the code. I could manually manage the process output. But then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a gui app. Which brings me to the next problem...

    Blocking processes cause the gui to become unresponsive.

    import textwrap, sys, os
    from subprocess import Popen
    from PyQt4.QtGui import *
    from PyQt4.QtCore import *

    test_path = 'test_file.py'
    with open(test_path, 'w') as file:
        file.write(textwrap.dedent('''
            import time
            for i in range(3):
                print 'Hello %i' % i
                time.sleep(1)'''))

    app = QApplication(sys.argv)
    button = QPushButton('Launch process')

    def launch_proc():
        # Can't move the window until process completes
        proc = Popen('python -B "%s"' % test_path)
        proc.communicate()

    button.connect(button, SIGNAL('clicked()'), launch_proc)
    button.show()
    app.exec_()
    os.remove(test_path)

    Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily. This is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work but I'm still a bit fuzzy on the details...

    Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all.
    You end up having to do a lot more detective work every time something goes wrong.

    Speaking of which, output has a way of disappearing when dealing with external processes. Like if you run something via the windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time. I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions, it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more...

    Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer cuz it's going so slow until you finally bring up your task manager and see 30 instances of the same process hanging out in the background. Also, hanging background processes can interfere with other instances of the process in various fun ways, such as causing permissions errors by holding a handle to a file or something like that. It seems like an easy solution to this would be to have the parent process kill the child process on exit if the child process didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes. This problem can tie in to problem 2 for extra fun, causing your gui to stop responding entirely and force you to kill everything with the task manager.

    F***ing quotes. Parameters often need to be passed to processes. This is a headache in itself. Especially if you're dealing with file paths. Say... 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ". But if you need to use more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string. Subprocess allows you to do this.

    Can't easily return data from a subprocess. You can use stdout of course. But what if you want to throw a print in there for debugging purposes? That's gonna screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another and everything works just fine.

    Obscure command-line flags and a crappy terminal-based help system. These are problems I often run into when using os level apps. Like the /k flag I mentioned, for holding a cmd window open, whose idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use google or StackOverflow to find the answer you need. But if not, you've got a lot of boring reading and frustrating trial and error to do.

    External factors. This one's kind of fuzzy. But when you leave the relatively sheltered harbor of your own scripts to deal with external processes you find yourself having to deal with the "outside world" to a much greater extent. And that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify its behavior.
There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
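    A rough sketch of one way to mitigate the first two problems with subprocess alone (QProcess's readyReadStandardOutput signal gives much the same effect): read the child's stdout in a background thread and tag each line, so child and parent output stay distinguishable and the GUI thread never blocks. The launch_tagged helper and the reuse of test_file.py from the first example are purely illustrative:

    import subprocess, threading, sys

    def launch_tagged(cmd, tag):
        # Start cmd and echo each of its output lines prefixed with tag, without blocking the caller.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                bufsize=1, universal_newlines=True)
        def pump():
            for line in proc.stdout:              # blocks only in this worker thread
                sys.stdout.write('%s: %s' % (tag, line))
            proc.stdout.close()
        t = threading.Thread(target=pump)
        t.daemon = True                           # don't keep the app alive on exit
        t.start()
        return proc

    if __name__ == '__main__':
        # cmd is a list, so no quoting headaches with paths containing spaces
        child = launch_tagged([sys.executable, '-B', 'test_file.py'], 'child')
        print('parent: still free to run the event loop while the child works')
        child.wait()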

    Read the article

  • How to find an entry-level job after you already have a graduate degree?

    - by Uri
    Note: I asked this question in early 2009. A couple of months later, I found a great job. I've previously updated this question with some tips for whoever ends up in a similar situation, and now cleaned it up a little for the benefit of the fresh batch of graduates. Original post: In my early 20s I abandoned a great C++ development career path in a major company to go to graduate school and get a research masters (3 years). I did another year in industrial research, and then moved to the US to attend graduate school again, getting another masters and a Ph.D in software engineering from a top school (another 6 years down the drain). I was coding the whole way throughout my degrees (core Java and Eclipse plug-ins) and working on research related to software engineering (usability of APIs). I ended up graduating the year of the recession, with a son on the way and the prospects of no healthcare. Academic jobs and industrial research jobs are quite scarce. Initially, I was naive, thinking that with my background, I could easily find a coding job. Big mistake. It turns out that I'm in a complicated position. Entry level positions are usually offered to college undergraduates. I attended my school's career fairs, but you could immediately see signs of Ph.D. aversion and overqualification issues. Some of the recruiters I spoke with explicitly told me that they wanted 20 year olds with clean slates, and some were looking for interns since they are in various forms of hiring freezes. I managed to get a couple of interviews from these career fairs and through recruiters. However, since I've been out of school for a long time and programming primarily in Java, I am also no longer proficient in C/C++ and the usual range of college-level interview questions that everyone uses. I had no problems with this when I was 19 and interviewing for my first job since a lot of what you do in C is manipulate pointers and I was coding C++ for fun and for school. Later I was routinely doing pointer manipulation on the job, and during my first masters taught college courses with data structures and C++. But even though I remember many properties of C++ well, it's been close to ten years since I regularly used C++ and pointers. As a Java developer I rarely had to work at this level, but experience in OOD and in writing good maintainable code is meaningless for C++ interviews. Reading books as a refresh and looking at sample code did not do the trick. I also looked at mid-to-senior level Java positions, but most of them focused on J2EE APIs rather than on core Java and required a certain number of years in industrial positions. Coding research tools and prior C++ experience doesn't count. So that sends me back to entry-level jobs that are posted through job-boards, and these are not common (mostly they are Monster junk), and small companies are even less likely to answer a Ph.D. compared to the giants who participate in top-10 career fairs. Even worse, in many companies initial screening is done by HR folks who really don't want to deal with anything anomalous like a Ph.D. Any tips on how I should approach this intractable position? For example, what should I write in cover letters? Note that while immigration is not an issue for me, I cannot go freelance as I need the benefits (and in particular group health insurance). During my studies I had no time to contribute to open-source projects or maintain a popular blog, so even if I invested in that now there would be no immediate benefit. 
Updates: In the two months after posting this I received several offers to work as a core Java developer in the financial industry and accepted one from a firm where I am working to this day. For those who find themselves in similar situations, here are my tips: Give up on trying to find an entry level positions. You can't undo time. Accept the fact that there is Ph.D. discrimination in the job market (some might say rightfully so). It is legal to discriminate based on education. No point fighting it. The most important tip is to focus on the language you are comfortable with. The sad truth about programming in a particular language is that it is not like riding a bike. If you haven't used a language in the last few years, and can't actually apply it routinely (not just as a refresher) before you start your search, it is going to be very difficult to do well in an interview. Now that I'm interviewing others, I routinely see it in folks with a mixed C++/Java background. We maintain "a shadow" of the old language but end up with a weird mix that makes it hard to interview on either. Entry-level folks are at an advantage here since they usually have one language. Memory can help you do great in a screening interview, but without recent day-to-day experience, code tests will be difficult. Despite the supposed relation, core Java programming and J2EE programming are two different things with different skillsets. If you come from academia, you likely have very little J2EE experience and may find it hard to get accepted for a J2EE job. J2EE jobs seem to have a larger list of acronyms in their requirements. In addition, from interviewing J2EE developers it seems that for many there is a focus on mastering specific APIs and architectures, whereas core Java development tends to be secondary. In the same way that I can no longer manipulate pointers well, a J2EE developer may have difficulties doing low level Java manipulation. This puts you at a relative advantage in competing for core Java jobs! If you are able to work for startups (in terms of family life and stability) or migrate to startup-rich areas such as the west coast, you can find many exciting opportunities where advanced degrees are a benefit. I've since been approached by several startups, although I had to decline. Work through a recruiter if possible. They have direct contacts with the hiring parties, allowing you to "stand out". It is better to get a clear yes/no confirmation from a recruiter on whether a company might be interested in interviewing you, than it is to send your resume and hope that someone will ever see it. Recruiters are also a great way of bypassing HR. However, also beware of recruiters. They have a vested interest and will go to various shady practices and pressure tactics. To find a good recruiter, talk to a friend who declined a job offer he got through a recruiter. A good recruiter, to me, is measured in how they handle that. Interview for the jobs that require your core strength. If you're rusty or entirely unfamiliar with a technology around which the job revolves, you're probably not a good match. Yes, you probably have the talent to master them, but most companies would want "instant gratification". I got my offers from companies that wanted core Java developer. I didn't do well on places that wanted advance C++ because I am too rusty and not up to date on recent libraries. I also didn't hear from companies that wanted lots of J2EE experience, and that's ok. 
Finding companies that want core Java without the web stack is harder, but they exist in specific industries (e.g., finance, defense). It takes a lot more legwork to track them down, but these jobs do exist. There are also different interview styles: some companies focus on puzzles, some on algorithms, and some on design and coding skills. I had the most success in places where the questions were closely related to the function I would actually be performing. Pick companies accordingly as well.

    Read the article

  • Drupal Ctools Form Wizard in a Block

    - by Iamjon
    Hi everyone I created a custom module that has a Ctools multi step form. It's basically a copy of http://www.nicklewis.org/using-chaos-tools-form-wizard-build-multistep-forms-drupal-6. The form works. I can see it if I got to the url i made for it. For the life of me I can't get the multistep form to show up in a block. Any clues? /** * Implementation of hook_block() * */ function mycrazymodule_block($op='list', $delta=0, $edit=array()) { switch ($op) { case 'list': $blocks[0]['info'] = t('SFT Getting Started'); $blocks[1]['info'] = t('SFT Contact US'); $blocks[2]['info'] = t('SFT News Letter'); return $blocks; case 'view': switch ($delta){ case '0': $block['subject'] = t('SFT Getting Started Subject'); $block['content'] = mycrazymodule_wizard(); break; case '1': $block['subject'] = t('SFT Contact US Subject'); $block['content'] = t('SFT Contact US content'); break; case '2': $block['subject'] = t('SFT News Letter Subject'); $block['content'] = t('SFT News Letter cONTENT'); break; } return $block; } } /** * Implementation of hook_menu(). */ function mycrazymodule_menu() { $items['hellocowboy'] = array( 'title' = 'Two Step Form', 'page callback' = 'mycrazymodule_wizard', 'access arguments' = array('access content') ); return $items; } /** * menu callback for the multistep form * step is whatever arg one is -- and will refer to the keys listed in * $form_info['order'], and $form_info['forms'] arrays */ function mycrazymodule_wizard() { $step = arg(1); // required includes for wizard $form_state = array(); ctools_include('wizard'); ctools_include('object-cache'); // The array that will hold the two forms and their options $form_info = array( 'id' = 'getting_started', 'path' = "hellocowboy/%step", 'show trail' = FALSE, 'show back' = FALSE, 'show cancel' = false, 'show return' =false, 'next text' = 'Submit', 'next callback' = 'getting_started_add_subtask_next', 'finish callback' = 'getting_started_add_subtask_finish', 'return callback' = 'getting_started_add_subtask_finish', 'order' = array( 'basic' = t('Step 1: Basic Info'), 'lecture' = t('Step 2: Choose Lecture'), ), 'forms' = array( 'basic' = array( 'form id' = 'basic_info_form' ), 'lecture' = array( 'form id' = 'choose_lecture_form' ), ), ); $form_state = array( 'cache name' = NULL, ); // no matter the step, you will load your values from the callback page $getstart = getting_started_get_page_cache(NULL); if (!$getstart) { // set form to first step -- we have no data $step = current(array_keys($form_info['order'])); $getstart = new stdClass(); //create cache ctools_object_cache_set('getting_started', $form_state['cache name'], $getstart); //print_r($getstart); } //THIS IS WHERE WILL STORE ALL FORM DATA $form_state['getting_started_obj'] = $getstart; // and this is the witchcraft that makes it work $output = ctools_wizard_multistep_form($form_info, $step, $form_state); return $output; } function basic_info_form(&$form, &$form_state){ $getstart = &$form_state['getting_started_obj']; $form['firstname'] = array( '#weight' = '0', '#type' = 'textfield', '#title' = t('firstname'), '#size' = 60, '#maxlength' = 255, '#required' = TRUE, ); $form['lastname'] = array( '#weight' = '1', '#type' = 'textfield', '#title' = t('lastname'), '#required' = TRUE, '#size' = 60, '#maxlength' = 255, ); $form['phone'] = array( '#weight' = '2', '#type' = 'textfield', '#title' = t('phone'), '#required' = TRUE, '#size' = 60, '#maxlength' = 255, ); $form['email'] = array( '#weight' = '3', '#type' = 'textfield', '#title' = t('email'), '#required' = TRUE, '#size' = 60, 
'#maxlength' = 255, ); $form['newsletter'] = array( '#weight' = '4', '#type' = 'checkbox', '#title' = t('I would like to receive the newsletter'), '#required' = TRUE, '#return_value' = 1, '#default_value' = 1, ); $form_state['no buttons'] = TRUE; } function basic_info_form_validate(&$form, &$form_state){ $email = $form_state['values']['email']; $phone = $form_state['values']['phone']; if(valid_email_address($email) != TRUE){ form_set_error('Dude you have an error', t('Where is your email?')); } //if (strlen($phone) 0 && !ereg('^[0-9]{1,3}-[0-9]{3}-[0-9]{3,4}-[0-9]{3,4}$', $phone)) { //form_set_error('Dude the phone', t('Phone number must be in format xxx-xxx-nnnn-nnnn.')); //} } function basic_info_form_submit(&$form, &$form_state){ //Grab the variables $firstname =check_plain ($form_state['values']['firstname']); $lastname = check_plain ($form_state['values']['lastname']); $email = check_plain ($form_state['values']['email']); $phone = check_plain ($form_state['values']['phone']); $newsletter = $form_state['values']['newsletter']; //Send the form and Grab the lead id $leadid = send_first_form($lastname, $firstname, $email,$phone, $newsletter); //Put into form $form_state['getting_started_obj']-firstname = $firstname; $form_state['getting_started_obj']-lastname = $lastname; $form_state['getting_started_obj']-email = $email; $form_state['getting_started_obj']-phone = $phone; $form_state['getting_started_obj']-newsletter = $newsletter; $form_state['getting_started_obj']-leadid = $leadid; } function choose_lecture_form(&$form, &$form_state){ $one = 'event 1' $two = 'event 2' $three = 'event 3' $getstart = &$form_state['getting_started_obj']; $form['lecture'] = array( '#weight' = '5', '#default_value' = 'two', '#options' = array( 'one' = $one, 'two' = $two, 'three' = $three, ), '#type' = 'radios', '#title' = t('Select Workshop'), '#required' = TRUE, ); $form['attendees'] = array( '#weight' = '6', '#default_value' = 'one', '#options' = array( 'one' = t('I will be arriving alone'), 'two' =t('I will be arriving with a guest'), ), '#type' = 'radios', '#title' = t('Attendees'), '#required' = TRUE, ); $form_state['no buttons'] = TRUE; } /** * Same idea as previous steps submit * */ function choose_lecture_form_submit(&$form, &$form_state) { $workshop = $form_state['values']['lecture']; $leadid = $form_state['getting_started_obj']-leadid; $attendees = $form_state['values']['attendees']; $form_state['getting_started_obj']-lecture = $workshop; $form_state['getting_started_obj']-attendees = $attendees; send_second_form($workshop, $attendees, $leadid); } /*----PART 3 CTOOLS CALLBACKS -- these usually don't have to be very unique ---------------------- */ /** * Callback generated when the add page process is finished. * this is where you'd normally save. */ function getting_started_add_subtask_finish(&$form_state) { dpm($form_state); $getstart = &$form_state['getting_started_obj']; drupal_set_message('mycrazymodule '.$getstart-name.' 
successfully deployed' ); //Get id // Clear the cache ctools_object_cache_clear('getting_started', $form_state['cache name']); $form_state['redirect'] = 'hellocowboy'; } /** * Callback for the proceed step * */ function getting_started_add_subtask_next(&$form_state) { dpm($form_state); $getstart = &$form_state['getting_started_obj']; $cache = ctools_object_cache_set('getting_started', $form_state['cache name'], $getstart); } /*----PART 4 CTOOLS FORM STORAGE HANDLERS -- these usually don't have to be very unique ---------------------- */ /** * Remove an item from the object cache. */ function getting_started_clear_page_cache($name) { ctools_object_cache_clear('getting_started', $name); } /** * Get the cached changes to a given task handler. */ function getting_started_get_page_cache($name) { $cache = ctools_object_cache_get('getting_started', $name); return $cache; } //Salesforce Functions function send_first_form($lastname, $firstname,$email,$phone, $newsletter){ $send = array("LastName" = $lastname , "FirstName" = $firstname, "Email" = $email ,"Phone" = $phone , "Newsletter__c" =$newsletter ); $sf = salesforce_api_connect(); $response = $sf-client-create(array($send), 'Lead'); dpm($response); return $response-id; } function send_second_form($workshop, $attendees, $leadid){ $send = array("Id" = $leadid , "Number_Of_Pepole__c" = "2" ); $sf = salesforce_api_connect(); $response = $sf-client-update(array($send), 'Lead'); dpm($response, 'the final response'); return $response-id; }

    Read the article

  • Error with my Android application HttpGet

    - by Coombes
    Basically I'm getting a strange issue with my Android application, it's supposed to grab a JSON Array and print out some values, the class looks like this: ShowComedianActivity.class package com.example.connecttest; public class ShowComedianActivity extends Activity{ TextView name; TextView add; TextView email; TextView tel; String id; // Progress Dialog private ProgressDialog pDialog; //JSON Parser class JSONParser jsonParser = new JSONParser(); // Single Comedian url private static final String url_comedian_details = "http://86.9.71.17/connect/get_comedian_details.php"; // JSON Node names private static final String TAG_SUCCESS = "success"; private static final String TAG_COMEDIAN = "comedian"; private static final String TAG_ID = "id"; private static final String TAG_NAME = "name"; private static final String TAG_ADDRESS = "address"; private static final String TAG_EMAIL = "email"; private static final String TAG_TEL = "tel"; public void onCreate(Bundle savedInstanceState){ super.onCreate(savedInstanceState); setContentView(R.layout.show_comedian); // Getting Comedian Details from intent Intent i = getIntent(); // Getting id from intent id = i.getStringExtra(TAG_ID); new GetComedianDetails().execute(); } class GetComedianDetails extends AsyncTask<String, String, String>{ protected void onPreExecute(){ super.onPreExecute(); pDialog = new ProgressDialog(ShowComedianActivity.this); pDialog.setMessage("Fetching Comedian details. Please wait..."); pDialog.setIndeterminate(false); pDialog.setCancelable(true); pDialog.show(); } @Override protected String doInBackground(String... params) { runOnUiThread(new Runnable(){ public void run(){ int success; try{ //Building parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("id",id)); // Getting comedian details via HTTP request // Uses a GET request JSONObject json = jsonParser.makeHttpRequest(url_comedian_details, "GET", params); // Check Log for json response Log.d("Single Comedian details", json.toString()); //JSON Success tag success = json.getInt(TAG_SUCCESS); if(success == 1){ // Succesfully received product details JSONArray comedianObj = json.getJSONArray(TAG_COMEDIAN); //JSON Array // get first comedian object from JSON Array JSONObject comedian = comedianObj.getJSONObject(0); // comedian with id found name = (TextView) findViewById(R.id.name); add = (TextView) findViewById(R.id.add); email = (TextView) findViewById(R.id.email); tel = (TextView) findViewById(R.id.tel); // Set text to details name.setText(comedian.getString(TAG_NAME)); add.setText(comedian.getString(TAG_ADDRESS)); email.setText(comedian.getString(TAG_EMAIL)); tel.setText(comedian.getString(TAG_TEL)); } } catch (JSONException e){ e.printStackTrace(); } } }); return null; } } } And my JSON Parser class looks like: package com.example.connecttest; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } // function get json from url // by making HTTP POST or GET method public JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) { // Making HTTP request try { // check for request method if(method == "POST"){ // request method is POST // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = 
httpResponse.getEntity(); is = httpEntity.getContent(); }else if(method == "GET"){ // request method is GET DefaultHttpClient httpClient = new DefaultHttpClient(); String paramString = URLEncodedUtils.format(params, "utf-8"); url += "?" + paramString; HttpGet httpGet = new HttpGet(url); HttpResponse httpResponse = httpClient.execute(httpGet); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } Now when I run a debug it's querying the correct address with ?id=1 on the end of the URL, and when I navigate to that url I get the following JSON Array: {"success":1,"comedian":[{"id":"1","name":"Michael Coombes","address":"5 Trevethenick Road","email":"[email protected]","tel":"xxxxxxxxxxxx"}]} However my app just crashes, the log-cat report looks like this: 03-22 02:05:02.140: E/Trace(3776): error opening trace file: No such file or directory (2) 03-22 02:05:04.590: E/AndroidRuntime(3776): FATAL EXCEPTION: main 03-22 02:05:04.590: E/AndroidRuntime(3776): android.os.NetworkOnMainThreadException 03-22 02:05:04.590: E/AndroidRuntime(3776): at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1117) 03-22 02:05:04.590: E/AndroidRuntime(3776): at libcore.io.BlockGuardOs.connect(BlockGuardOs.java:84) 03-22 02:05:04.590: E/AndroidRuntime(3776): at libcore.io.IoBridge.connectErrno(IoBridge.java:127) 03-22 02:05:04.590: E/AndroidRuntime(3776): at libcore.io.IoBridge.connect(IoBridge.java:112) 03-22 02:05:04.590: E/AndroidRuntime(3776): at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:192) 03-22 02:05:04.590: E/AndroidRuntime(3776): at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:459) 03-22 02:05:04.590: E/AndroidRuntime(3776): at java.net.Socket.connect(Socket.java:842) 03-22 02:05:04.590: E/AndroidRuntime(3776): at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:119) 03-22 02:05:04.590: E/AndroidRuntime(3776): at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:144) 03-22 02:05:04.590: E/AndroidRuntime(3776): at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164) 03-22 02:05:04.590: E/AndroidRuntime(3776): at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119) 03-22 02:05:04.590: E/AndroidRuntime(3776): at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:360) 03-22 02:05:04.590: E/AndroidRuntime(3776): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555) 03-22 02:05:04.590: E/AndroidRuntime(3776): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487) 03-22 02:05:04.590: E/AndroidRuntime(3776): at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465) 03-22 02:05:04.590: E/AndroidRuntime(3776): at com.example.connecttest.JSONParser.makeHttpRequest(JSONParser.java:62) 03-22 02:05:04.590: E/AndroidRuntime(3776): at com.example.connecttest.ShowComedianActivity$GetComedianDetails$1.run(ShowComedianActivity.java:89) 03-22 02:05:04.590: E/AndroidRuntime(3776): at android.os.Handler.handleCallback(Handler.java:615) 03-22 02:05:04.590: E/AndroidRuntime(3776): at android.os.Handler.dispatchMessage(Handler.java:92) 03-22 02:05:04.590: E/AndroidRuntime(3776): at android.os.Looper.loop(Looper.java:137) 03-22 02:05:04.590: E/AndroidRuntime(3776): at android.app.ActivityThread.main(ActivityThread.java:4745) 03-22 02:05:04.590: E/AndroidRuntime(3776): at java.lang.reflect.Method.invokeNative(Native Method) 03-22 02:05:04.590: E/AndroidRuntime(3776): at java.lang.reflect.Method.invoke(Method.java:511) 03-22 02:05:04.590: E/AndroidRuntime(3776): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786) 03-22 02:05:04.590: E/AndroidRuntime(3776): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553) 03-22 02:05:04.590: E/AndroidRuntime(3776): at dalvik.system.NativeStart.main(Native Method) From this I'm guessing the error is in the jsonParser.makeHttpRequest however I can't for the life of me figure out what's going wrong and was hoping someone brighter than I could illuminate me.
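
The log-cat already names the cause: android.os.NetworkOnMainThreadException. Wrapping the whole request in runOnUiThread() inside doInBackground() pushes the HTTP call back onto the main thread, which Android forbids from Honeycomb onward. A minimal sketch of the usual AsyncTask split is below, reusing the poster's fields (pDialog, id, jsonParser, the TAG_* constants) and trimming error handling; it changes the task's result type to JSONObject so the parsed response can be handed to onPostExecute, which already runs on the UI thread. As a side note, the method == "POST" comparisons in JSONParser would be safer as .equals(), though they are not what crashes here.

class GetComedianDetails extends AsyncTask<String, String, JSONObject> {

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
        pDialog = new ProgressDialog(ShowComedianActivity.this);
        pDialog.setMessage("Fetching Comedian details. Please wait...");
        pDialog.show();
    }

    @Override
    protected JSONObject doInBackground(String... params) {
        // No runOnUiThread() here: this method already runs off the main thread.
        List<NameValuePair> args = new ArrayList<NameValuePair>();
        args.add(new BasicNameValuePair("id", id));
        return jsonParser.makeHttpRequest(url_comedian_details, "GET", args);
    }

    @Override
    protected void onPostExecute(JSONObject json) {
        // Runs on the UI thread, so touching the TextViews is safe here.
        pDialog.dismiss();
        try {
            if (json != null && json.getInt(TAG_SUCCESS) == 1) {
                JSONObject comedian = json.getJSONArray(TAG_COMEDIAN).getJSONObject(0);
                ((TextView) findViewById(R.id.name)).setText(comedian.getString(TAG_NAME));
                ((TextView) findViewById(R.id.add)).setText(comedian.getString(TAG_ADDRESS));
                ((TextView) findViewById(R.id.email)).setText(comedian.getString(TAG_EMAIL));
                ((TextView) findViewById(R.id.tel)).setText(comedian.getString(TAG_TEL));
            }
        } catch (JSONException e) {
            e.printStackTrace();
        }
    }
}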

    Read the article

  • Can't figure out where the race condition is occurring

    - by Nik
    I'm using Valgrind --tool=drd to check my application that uses Boost::thread. Basically, the application populates a set of "Book" values with "Kehai" values based on inputs through a socket connection. On a seperate thread, a user can connect and get the books send to them. Its fairly simple, so i figured using a boost::mutex::scoped_lock on the location that serializes the book and the location that clears out the book data should be suffice to prevent any race conditions. Here is the code: void Book::clear() { boost::mutex::scoped_lock lock(dataMutex); for(int i =NUM_KEHAI-1; i >= 0; --i) { bid[i].clear(); ask[i].clear(); } } int Book::copyChangedKehaiToString(char* dst) const { boost::mutex::scoped_lock lock(dataMutex); sprintf(dst, "%-4s%-13s",market.c_str(),meigara.c_str()); int loc = 17; for(int i = 0; i < Book::NUM_KEHAI; ++i) { if(ask[i].changed > 0) { sprintf(dst+loc,"A%i%-21s%-21s%-21s%-8s%-4s",i,ask[i].price.c_str(),ask[i].volume.c_str(),ask[i].number.c_str(),ask[i].postTime.c_str(),ask[i].status.c_str()); loc += 77; } } for(int i = 0; i < Book::NUM_KEHAI; ++i) { if(bid[i].changed > 0) { sprintf(dst+loc,"B%i%-21s%-21s%-21s%-8s%-4s",i,bid[i].price.c_str(),bid[i].volume.c_str(),bid[i].number.c_str(),bid[i].postTime.c_str(),bid[i].status.c_str()); loc += 77; } } return loc; } The clear() function and the copyChangedKehaiToString() function are called in the datagetting thread and data sending thread,respectively. Also, as a note, the class Book: struct Book { private: Book(const Book&); Book& operator=(const Book&); public: static const int NUM_KEHAI=10; struct Kehai; friend struct Book::Kehai; struct Kehai { private: Kehai& operator=(const Kehai&); public: std::string price; std::string volume; std::string number; std::string postTime; std::string status; int changed; Kehai(); void copyFrom(const Kehai& other); Kehai(const Kehai& other); inline void clear() { price.assign(""); volume.assign(""); number.assign(""); postTime.assign(""); status.assign(""); changed = -1; } }; std::vector<Kehai> bid; std::vector<Kehai> ask; tm recTime; mutable boost::mutex dataMutex; Book(); void clear(); int copyChangedKehaiToString(char * dst) const; }; When using valgrind --tool=drd, i get race condition errors such as the one below: ==26330== Conflicting store by thread 1 at 0x0658fbb0 size 4 ==26330== at 0x653AE68: std::string::_M_mutate(unsigned int, unsigned int, unsigned int) (in /usr/lib/libstdc++.so.6.0.8) ==26330== by 0x653AFC9: std::string::_M_replace_safe(unsigned int, unsigned int, char const*, unsigned int) (in /usr/lib/libstdc++.so.6.0.8) ==26330== by 0x653B064: std::string::assign(char const*, unsigned int) (in /usr/lib/libstdc++.so.6.0.8) ==26330== by 0x653B134: std::string::assign(char const*) (in /usr/lib/libstdc++.so.6.0.8) ==26330== by 0x8055D64: Book::Kehai::clear() (Book.h:50) ==26330== by 0x8094A29: Book::clear() (Book.cpp:78) ==26330== by 0x808537E: RealKernel::start() (RealKernel.cpp:86) ==26330== by 0x804D15A: main (main.cpp:164) ==26330== Allocation context: BSS section of /usr/lib/libstdc++.so.6.0.8 ==26330== Other segment start (thread 2) ==26330== at 0x400BB59: pthread_mutex_unlock (drd_pthread_intercepts.c:633) ==26330== by 0xC59565: pthread_mutex_unlock (in /lib/libc-2.5.so) ==26330== by 0x805477C: boost::mutex::unlock() (mutex.hpp:56) ==26330== by 0x80547C9: boost::unique_lock<boost::mutex>::~unique_lock() (locks.hpp:340) ==26330== by 0x80949BA: Book::copyChangedKehaiToString(char*) const (Book.cpp:134) ==26330== by 0x80937EE: BookSerializer::serializeBook(Book 
const&, std::string const&) (BookSerializer.cpp:41) ==26330== by 0x8092D05: BookSnapshotManager::getSnaphotDataList() (BookSnapshotManager.cpp:72) ==26330== by 0x8088179: SnapshotServer::getDataList() (SnapshotServer.cpp:246) ==26330== by 0x808870F: SnapshotServer::run() (SnapshotServer.cpp:183) ==26330== by 0x808BAF5: boost::_mfi::mf0<void, RealThread>::operator()(RealThread*) const (mem_fn_template.hpp:49) ==26330== by 0x808BB4D: void boost::_bi::list1<boost::_bi::value<RealThread*> >::operator()<boost::_mfi::mf0<void, RealThread>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, RealThread>&, boost::_bi::list0&, int) (bind.hpp:253) ==26330== by 0x808BB90: boost::_bi::bind_t<void, boost::_mfi::mf0<void, RealThread>, boost::_bi::list1<boost::_bi::value<RealThread*> > >::operator()() (bind_template.hpp:20) ==26330== Other segment end (thread 2) ==26330== at 0x400B62A: pthread_mutex_lock (drd_pthread_intercepts.c:580) ==26330== by 0xC59535: pthread_mutex_lock (in /lib/libc-2.5.so) ==26330== by 0x80546B8: boost::mutex::lock() (mutex.hpp:51) ==26330== by 0x805473B: boost::unique_lock<boost::mutex>::lock() (locks.hpp:349) ==26330== by 0x8054769: boost::unique_lock<boost::mutex>::unique_lock(boost::mutex&) (locks.hpp:227) ==26330== by 0x8094711: Book::copyChangedKehaiToString(char*) const (Book.cpp:113) ==26330== by 0x80937EE: BookSerializer::serializeBook(Book const&, std::string const&) (BookSerializer.cpp:41) ==26330== by 0x808870F: SnapshotServer::run() (SnapshotServer.cpp:183) ==26330== by 0x808BAF5: boost::_mfi::mf0<void, RealThread>::operator()(RealThread*) const (mem_fn_template.hpp:49) ==26330== by 0x808BB4D: void boost::_bi::list1<boost::_bi::value<RealThread*> >::operator()<boost::_mfi::mf0<void, RealThread>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, RealThread>&, boost::_bi::list0&, int) (bind.hpp:253) For the life of me, i can't figure out where the race condition is. As far as I can tell, clearing the kehai is done only after having taken the mutex, and the same holds true with copying it to a string. Does anyone have any ideas what could be causing this, or where I should look? Thank you kindly.
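
Two things are worth checking here, offered as a sketch rather than a definitive diagnosis. First, locking dataMutex inside clear() and copyChangedKehaiToString() only helps if every writer takes the same mutex; the code that actually fills bid/ask from the socket is not shown above, and if it assigns the Kehai strings without the lock, drd will report exactly this kind of conflicting store inside std::string. Second, the flagged address lies in the BSS of libstdc++: GCC's copy-on-write std::string shares internal representations (including the empty-string object) across threads, so string copies and clears touch shared reference counts that drd sometimes flags even when your own locking is sound. A hypothetical guarded updater for the data-getting thread could look like the following (the setBid name and signature are invented for illustration):

// Sketch only: whatever populates the book on the socket thread must hold the
// same dataMutex that clear() and copyChangedKehaiToString() take.
#include <boost/thread/mutex.hpp>
#include <cstddef>
#include <string>
#include <vector>

struct Kehai { std::string price, volume; int changed; };

struct Book {
    mutable boost::mutex dataMutex;
    std::vector<Kehai> bid;

    Book() : bid(10) {}

    // Hypothetical updater running on the data-getting thread.
    void setBid(std::size_t level, const std::string& price, const std::string& volume) {
        boost::mutex::scoped_lock lock(dataMutex);   // same lock as the readers
        bid[level].price   = price;                  // string writes now serialized
        bid[level].volume  = volume;
        bid[level].changed = 1;
    }
};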

    Read the article

  • C socket programming: select() is returning 0 despite messages sent from server

    - by Fantastic Fourier
    Hey all, I'm using select() to recv() messages from server, using TCP/IP. When I send() messages from the server, it returns a reasonable number of bytes, saying it's sent successful. And it does get to the client successfully when I use while loop to just recv(). Everything is fine and dandy. while(1) recv() // obviously pseudocode However, when I try to use select(), select() returns 0 from timeout (which is set to 1 second) and for the life of me I cannot figure out why it doesn't see the messages sent from the server. I should also mention that when the server disconnects, select() doesn't see that either, where as if I were to use recv(), it would return 0 to indicate that the connection using the socket has been closed. Any inputs or thoughts are deeply appreciated. #include <arpa/inet.h> #include <errno.h> #include <fcntl.h> #include <netdb.h> #include <netinet/in.h> #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <strings.h> #include <sys/select.h> #include <sys/socket.h> #include <sys/time.h> #include <sys/types.h> #include <time.h> #include <unistd.h> #define SERVER_PORT 10000 #define MAX_CONNECTION 20 #define MAX_MSG 50 struct client { char c_name[MAX_MSG]; char g_name[MAX_MSG]; int csock; int host; // 0 = not host of a multicast group struct sockaddr_in client_address; struct client * next_host; struct client * next_client; }; struct fd_info { char c_name[MAX_MSG]; int socks_inuse[MAX_CONNECTION]; int sock_fd, max_fd; int exit; struct client * c_sys; struct sockaddr_in c_address[MAX_CONNECTION]; struct sockaddr_in server_address; struct sockaddr_in client_address; fd_set read_set; }; struct message { char c_name[MAX_MSG]; char g_name[MAX_MSG]; char _command[3][MAX_MSG]; char _payload[MAX_MSG]; struct sockaddr_in client_address; struct client peer; }; int main(int argc, char * argv[]) { char * host; char * temp; int i, sockfd; int msg_len, rv, ready; int connection, management, socketread; int sockfds[MAX_CONNECTION]; // for three threads that handle new connections, user inputs and select() for sockets pthread_t connection_handler, manager, socket_reader; struct sockaddr_in server_address, client_address; struct hostent * hserver, cserver; struct timeval timeout; struct message msg; struct fd_info info; info.exit = 0; // exit information: if exit = 1, threads quit info.c_sys = NULL; // looking up from the host database if (argc == 3) { host = argv[1]; // server address strncpy(info.c_name, argv[2], strlen(argv[2])); // client name } else { printf("plz read the manual, kthxbai\n"); exit(1); } printf("host is %s and hp is %p\n", host, hserver); hserver = gethostbyname(host); if (hserver) { printf("host found: %s\n", hserver->h_name ); } else { printf("host not found\n"); exit(1); } // setting up address and port structure information on serverside bzero((char * ) &server_address, sizeof(server_address)); // copy zeroes into string server_address.sin_family = AF_INET; memcpy(&server_address.sin_addr, hserver->h_addr, hserver->h_length); server_address.sin_port = htons(SERVER_PORT); bzero((char * ) &client_address, sizeof(client_address)); // copy zeroes into string client_address.sin_family = AF_INET; client_address.sin_addr.s_addr = htonl(INADDR_ANY); client_address.sin_port = htons(SERVER_PORT); // opening up socket sockfd = socket(AF_INET, SOCK_STREAM, 0); if (sockfd < 0) exit(1); else { printf("socket is opened: %i \n", sockfd); info.sock_fd = sockfd; } // sets up time out option for the bound socket timeout.tv_sec = 1; // seconds 
timeout.tv_usec = 0; // micro seconds ( 0.5 seconds) setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(struct timeval)); // binding socket to a port rv = bind(sockfd, (struct sockaddr *) &client_address, sizeof(client_address)); if (rv < 0) { printf("MAIN: ERROR bind() %i: %s\n", errno, strerror(errno)); exit(1); } else printf("socket is bound\n"); printf("MAIN: %li \n", client_address.sin_addr.s_addr); // connecting rv = connect(sockfd, (struct sockaddr *) &server_address, sizeof(server_address)); info.server_address = server_address; info.client_address = client_address; info.sock_fd = sockfd; info.max_fd = sockfd; printf("rv = %i\n", rv); if (rv < 0) { printf("MAIN: ERROR connect() %i: %s\n", errno, strerror(errno)); exit(1); } else printf("connected\n"); fd_set readset; FD_ZERO(&readset); FD_ZERO(&info.read_set); FD_SET(info.sock_fd, &info.read_set); while(1) { readset = info.read_set; printf("MAIN: %i \n", readset); ready = select((info.max_fd)+1, &readset, NULL, NULL, &timeout); if(ready == -1) { sleep(2); printf("TEST: MAIN: ready = -1. %s \n", strerror(errno)); } else if (ready == 0) { sleep(2); printf("TEST: MAIN: ready = 0. %s \n", strerror(errno)); } else if (ready > 0) { printf("TEST: MAIN: ready = %i. %s at socket %i \n", ready, strerror(errno), i); for(i = 0; i < ((info.max_fd)+1); i++) { if(FD_ISSET(i, &readset)) { rv = recv(sockfd, &msg, 500, 0); if(rv < 0) continue; else if(rv > 0) printf("MAIN: TEST: %s %s \n", msg._command[0], msg._payload); else if (rv == 0) { sleep(3); printf("MAIN: TEST: SOCKET CLOSEDDDDDD \n"); } FD_CLR(i, &readset); } } } info.read_set = readset; } // close connection close(sockfd); printf("socket closed. BYE! \n"); return(0); }
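
Two details of select() explain what you are seeing. select() modifies the fd_sets passed to it, and on a timeout it clears them; the loop above then does info.read_set = readset at the bottom of each pass, copying the emptied working set back over the master set, so every subsequent call watches no descriptors and can only time out. On Linux, select() also decrements the timeval, so the timeout has to be re-initialised on every iteration (and printing an fd_set with %i is undefined, so that trace line is meaningless). A minimal sketch of the usual pattern, assuming sockfd is the connected socket from the code above:

/* Keep a master set that select() never sees, hand select() a throw-away
 * copy, and re-arm the timeout each pass because Linux select() modifies it. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

static void recv_loop(int sockfd)
{
    fd_set master, working;
    char buf[512];

    FD_ZERO(&master);
    FD_SET(sockfd, &master);

    for (;;) {
        working = master;                 /* select() clobbers its argument */
        struct timeval tv = { 1, 0 };     /* fresh timeout each iteration   */

        int ready = select(sockfd + 1, &working, NULL, NULL, &tv);
        if (ready < 0) { perror("select"); break; }
        if (ready == 0) continue;         /* timed out, nothing to read     */

        if (FD_ISSET(sockfd, &working)) {
            ssize_t n = recv(sockfd, buf, sizeof buf, 0);
            if (n == 0) break;            /* server closed the connection   */
            if (n > 0) printf("received %zd bytes\n", n);
        }
        /* never copy 'working' back into 'master' here */
    }
}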

    Read the article

  • Building applications with WPF, MVVM and Prism (aka CAG)

    - by skjagini
    In this article I am going to walk through an application using WPF and Prism (aka composite application guidance, CAG) which simulates engaging a taxi (cab).  The rules are simple, the app would have3 screens A login screen to authenticate the user An information screen. A screen to engage the cab and roam around and calculating the total fare Metered Rate of Fare The meter is required to be engaged when a cab is occupied by anyone $3.00 upon entry $0.35 for each additional unit The unit fare is: one-fifth of a mile, when the cab is traveling at 6 miles an hour or more; or 60 seconds when not in motion or traveling at less than 12 miles per hour. Night surcharge of $.50 after 8:00 PM & before 6:00 AM Peak hour Weekday Surcharge of $1.00 Monday - Friday after 4:00 PM & before 8:00 PM New York State Tax Surcharge of $.50 per ride. Example: Friday (2010-10-08) 5:30pm Start at Lexington Ave & E 57th St End at Irving Pl & E 15th St Start = $3.00 Travels 2 miles at less than 6 mph for 15 minutes = $3.50 Travels at more than 12 mph for 5 minutes = $1.75 Peak hour Weekday Surcharge = $1.00 (ride started at 5:30 pm) New York State Tax Surcharge = $0.50 Before we dive into the app, I would like to give brief description about the framework.  If you want to jump on to the source code, scroll all the way to the end of the post. MVVM MVVM pattern is in no way related to the usage of PRISM in your application and should be considered if you are using WPF irrespective of PRISM or not. Lets say you are not familiar with MVVM, your typical UI would involve adding some UI controls like text boxes, a button, double clicking on the button,  generating event handler, calling a method from business layer and updating the user interface, it works most of the time for developing small scale applications. The problem with this approach is that there is some amount of code specific to business logic wrapped in UI specific code which is hard to unit test it, mock it and MVVM helps to solve the exact problem. MVVM stands for Model(M) – View(V) – ViewModel(VM),  based on the interactions with in the three parties it should be called VVMM,  MVVM sounds more like MVC (Model-View-Controller) so the name. Why it should be called VVMM: View – View Model - Model WPF allows to create user interfaces using XAML and MVVM takes it to the next level by allowing complete separation of user interface and business logic. In WPF each view will have a property, DataContext when set to an instance of a class (which happens to be your view model) provides the data the view is interested in, i.e., view interacts with view model and at the same time view model interacts with view through DataContext. Sujith, if view and view model are interacting directly with each other how does MVVM is helping me separation of concerns? Well, the catch is DataContext is of type Object, since it is of type object view doesn’t know exact type of view model allowing views and views models to be loosely coupled. View models aggregate data from models (data access layer, services, etc) and make it available for views through properties, methods etc, i.e., View Models interact with Models. PRISM Prism is provided by Microsoft Patterns and Practices team and it can be downloaded from codeplex for source code,  samples and documentation on msdn.  The name composite implies, to compose user interface from different modules (views) without direct dependencies on each other, again allowing  loosely coupled development. 
Well Sujith, I can already do that with user controls, why shall I learn another framework?  That’s correct, you can decouple using user controls, but you still have to manage some amount of coupling, like how to do you communicate between the controls, how do you subscribe/unsubscribe, loading/unloading views dynamically. Prism is not a replacement for user controls, provides the following features which greatly help in designing the composite applications. Dependency Injection (DI)/ Inversion of Control (IoC) Modules Regions Event Aggregator  Commands Simply put, MVVM helps building a single view and Prism helps building an application using the views There are other open source alternatives to Prism, like MVVMLight, Cinch, take a look at them as well. Lets dig into the source code.  1. Solution The solution is made of the following projects Framework: Holds the common functionality in building applications using WPF and Prism TaxiClient: Start up project, boot strapping and app styling TaxiCommon: Helps with the business logic TaxiModules: Holds the meat of the application with views and view models TaxiTests: To test the application 2. DI / IoC Dependency Injection (DI) as the name implies refers to injecting dependencies and Inversion of Control (IoC) means the calling code has no direct control on the dependencies, opposite of normal way of programming where dependencies are passed by caller, i.e inversion; aside from some differences in terminology the concept is same in both the cases. The idea behind DI/IoC pattern is to reduce the amount of direct coupling between different components of the application, the higher the dependency the more tightly coupled the application resulting in code which is hard to modify, unit test and mock.  Initializing Dependency Injection through BootStrapper TaxiClient is the starting project of the solution and App (App.xaml)  is the starting class that gets called when you run the application. From the App’s OnStartup method we will invoke BootStrapper.   namespace TaxiClient { /// <summary> /// Interaction logic for App.xaml /// </summary> public partial class App : Application { protected override void OnStartup(StartupEventArgs e) { base.OnStartup(e);   (new BootStrapper()).Run(); } } } BootStrapper is your contact point for initializing the application including dependency injection, creating Shell and other frameworks. We are going to use Unity for DI and there are lot of open source DI frameworks like Spring.Net, StructureMap etc with different feature set  and you can choose a framework based on your preferences. Note that Prism comes with in built support for Unity, for example we are deriving from UnityBootStrapper in our case and for any other DI framework you have to extend the Prism appropriately   namespace TaxiClient { public class BootStrapper: UnityBootstrapper { protected override IModuleCatalog CreateModuleCatalog() { return new ConfigurationModuleCatalog(); } protected override DependencyObject CreateShell() { Framework.FrameworkBootStrapper.Run(Container, Application.Current.Dispatcher);   Shell shell = new Shell(); shell.ResizeMode = ResizeMode.NoResize; shell.Show();   return shell; } } } Lets take a look into  FrameworkBootStrapper to check out how to register with unity container. 
namespace Framework { public class FrameworkBootStrapper { public static void Run(IUnityContainer container, Dispatcher dispatcher) { UIDispatcher uiDispatcher = new UIDispatcher(dispatcher); container.RegisterInstance<IDispatcherService>(uiDispatcher);   container.RegisterType<IInjectSingleViewService, InjectSingleViewService>( new ContainerControlledLifetimeManager());   . . . } } } In the above code we are registering two components with unity container. You shall observe that we are following two different approaches, RegisterInstance and RegisterType.  With RegisterInstance we are registering an existing instance and the same instance will be returned for every request made for IDispatcherService   and with RegisterType we are requesting unity container to create an instance for us when required, i.e., when I request for an instance for IInjectSingleViewService, unity will create/return an instance of InjectSingleViewService class and with RegisterType we can configure the life time of the instance being created. With ContaienrControllerLifetimeManager, the unity container caches the instance and reuses for any subsequent requests, without recreating a new instance. Lets take a look into FareViewModel.cs and it’s constructor. The constructor takes one parameter IEventAggregator and if you try to find all references in your solution for IEventAggregator, you will not find a single location where an instance of EventAggregator is passed directly to the constructor. The compiler still finds an instance and works fine because Prism is already configured when used with Unity container to return an instance of EventAggregator when requested for IEventAggregator and in this particular case it is called constructor injection. public class FareViewModel:ObservableBase, IDataErrorInfo { ... private IEventAggregator _eventAggregator;   public FareViewModel(IEventAggregator eventAggregator) { _eventAggregator = eventAggregator; InitializePropertyNames(); InitializeModel(); PropertyChanged += OnPropertyChanged; } ... 3. Shell Shells are very similar in operation to Master Pages in asp.net or MDI in Windows Forms. And shells contain regions which display the views, you can have as many regions as you wish in a given view. You can also nest regions. i.e, one region can load a view which in itself may contain other regions. We have to create a shell at the start of the application and are doing it by overriding CreateShell method from BootStrapper From the following Shell.xaml you shall notice that we have two content controls with Region names as ‘MenuRegion’ and ‘MainRegion’.  The idea here is that you can inject any user controls into the regions dynamically, i.e., a Menu User Control for MenuRegion and based on the user action you can load appropriate view into MainRegion.    
<Window x:Class="TaxiClient.Shell" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Regions="clr-namespace:Microsoft.Practices.Prism.Regions;assembly=Microsoft.Practices.Prism" Title="Taxi" Height="370" Width="800"> <Grid Margin="2"> <ContentControl Regions:RegionManager.RegionName="MenuRegion" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />   <ContentControl Grid.Row="1" Regions:RegionManager.RegionName="MainRegion" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" /> <!--<Border Grid.ColumnSpan="2" BorderThickness="2" CornerRadius="3" BorderBrush="LightBlue" />-->   </Grid> </Window> 4. Modules Prism provides the ability to build composite applications and modules play an important role in it. For example if you are building a Mortgage Loan Processor application with 3 components, i.e. customer’s credit history,  existing mortgages, new home/loan information; and consider that the customer’s credit history component involves gathering data about his/her address, background information, job details etc. The idea here using Prism modules is to separate the implementation of these 3 components into their own visual studio projects allowing to build components with no dependency on each other and independently. If we need to add another component to the application, the component can be developed by in house team or some other team in the organization by starting with a new Visual Studio project and adding to the solution at the run time with very little knowledge about the application. Prism modules are defined by implementing the IModule interface and each visual studio project to be considered as a module should implement the IModule interface.  From the BootStrapper.cs you shall observe that we are overriding the method by returning a ConfiguratingModuleCatalog which returns the modules that are registered for the application using the app.config file  and you can also add module using code. Lets take a look into configuration file.   <?xml version="1.0"?> <configuration> <configSections> <section name="modules" type="Microsoft.Practices.Prism.Modularity.ModulesConfigurationSection, Microsoft.Practices.Prism"/> </configSections> <modules> <module assemblyFile="TaxiModules.dll" moduleType="TaxiModules.ModuleInitializer, TaxiModules" moduleName="TaxiModules"/> </modules> </configuration> Here we are adding TaxiModules project to our solution and TaxiModules.ModuleInitializer implements IModule interface   5. Module Mapper With Prism modules you can dynamically add or remove modules from the regions, apart from that Prism also provides API to control adding/removing the views from a region within the same module. Taxi Information Screen: Engage the Taxi Screen: The sample application has two screens, ‘Taxi Information’ and ‘Engage the Taxi’ and they both reside in same module, TaxiModules. ‘Engage the Taxi’ is again made of two user controls, FareView on the left and TotalView on the right. We have created a Shell with two regions, MenuRegion and MainRegion with menu loaded into MenuRegion. We can create a wrapper user control called EngageTheTaxi made of FareView and TotalView and load either TaxiInfo or EngageTheTaxi into MainRegion based on the user action. 
Though it will work it tightly binds the user controls and for every combination of user controls, we need to create a dummy wrapper control to contain them. Instead we can apply the principles we learned so far from Shell/regions and introduce another template (LeftAndRightRegionView.xaml) made of two regions Region1 (left) and Region2 (right) and load  FareView and TotalView dynamically.  To help with loading of the views dynamically I have introduce an helper an interface, IInjectSingleViewService,  idea suggested by Mike Taulty, a must read blog for .Net developers. using System; using System.Collections.Generic; using System.ComponentModel;   namespace Framework.PresentationUtility.Navigation {   public interface IInjectSingleViewService : INotifyPropertyChanged { IEnumerable<CommandViewDefinition> Commands { get; } IEnumerable<ModuleViewDefinition> Modules { get; }   void RegisterViewForRegion(string commandName, string viewName, string regionName, Type viewType); void ClearViewFromRegion(string viewName, string regionName); void RegisterModule(string moduleName, IList<ModuleMapper> moduleMappers); } } The Interface declares three methods to work with views: RegisterViewForRegion: Registers a view with a particular region. You can register multiple views and their regions under one command.  When this particular command is invoked all the views registered under it will be loaded into their regions. ClearViewFromRegion: To unload a specific view from a region. RegisterModule: The idea is when a command is invoked you can load the UI with set of controls in their default position and based on the user interaction, you can load different contols in to different regions on the fly.  And it is supported ModuleViewDefinition and ModuleMappers as shown below. namespace Framework.PresentationUtility.Navigation { public class ModuleViewDefinition { public string ModuleName { get; set; } public IList<ModuleMapper> ModuleMappers; public ICommand Command { get; set; } }   public class ModuleMapper { public string ViewName { get; set; } public string RegionName { get; set; } public Type ViewType { get; set; } } } 6. Event Aggregator Prism event aggregator enables messaging between components as in Observable pattern, Notifier notifies the Observer which receives notification it is interested in. When it comes to Observable pattern, Observer has to unsubscribes for notifications when it no longer interested in notifications, which allows the Notifier to remove the Observer’s reference from it’s local cache. Though .Net has managed garbage collection it cannot remove inactive the instances referenced by an active instance resulting in memory leak, keeping the Observers in memory as long as Notifier stays in memory.  Developers have to be very careful to unsubscribe when necessary and it often gets overlooked, to overcome these problems Prism Event Aggregator uses weak references to cache the reference (Observer in this case)  and releases the reference (memory) once the instance goes out of scope. Using event aggregator is very simple, declare a generic type of CompositePresenationEvent by inheriting from it. using Microsoft.Practices.Prism.Events; using TaxiCommon.BAO;   namespace TaxiCommon.CompositeEvents { public class TaxiOnMoveEvent:CompositePresentationEvent<TaxiOnMove> { } }   TaxiOnMove.cs includes the properties which we want to exchange between the parties, FareView and TotalView. 
using System;   namespace TaxiCommon.BAO { public class TaxiOnMove { public TimeSpan MinutesAtTweleveMPH { get; set; } public double MilesAtSixMPH { get; set; } } }   Lets take a look into FareViewodel (Notifier) and how it raises the event.  Here we are raising the event by getting the event through GetEvent<..>() and publishing it with the payload private void OnAddMinutes(object obj) { TaxiOnMove payload = new TaxiOnMove(); if(MilesAtSixMPH != null) payload.MilesAtSixMPH = MilesAtSixMPH.Value; if(MinutesAtTweleveMPH != null) payload.MinutesAtTweleveMPH = new TimeSpan(0,0,MinutesAtTweleveMPH.Value,0);   _eventAggregator.GetEvent<TaxiOnMoveEvent>().Publish(payload); ResetMinutesAndMiles(); } And TotalViewModel(Observer) subscribes to notifications by getting the event through GetEvent<..>() namespace TaxiModules.ViewModels { public class TotalViewModel:ObservableBase { .... private IEventAggregator _eventAggregator;   public TotalViewModel(IEventAggregator eventAggregator) { _eventAggregator = eventAggregator; ... }   private void SubscribeToEvents() { _eventAggregator.GetEvent<TaxiStartedEvent>() .Subscribe(OnTaxiStarted, ThreadOption.UIThread,false,(filter) => true); _eventAggregator.GetEvent<TaxiOnMoveEvent>() .Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true); _eventAggregator.GetEvent<TaxiResetEvent>() .Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true); }   ... private void OnTaxiMove(TaxiOnMove taxiOnMove) { OnMoveFare fare = new OnMoveFare(taxiOnMove); Fares.Add(fare); SetTotalFare(new []{fare}); }   .... 7. MVVM through example In this section we are going to look into MVVM implementation through example.  I have all the modules declared in a single project, TaxiModules, again it is not necessary to have them into one project. Once the user logs into the application, will be greeted with the ‘Engage the Taxi’ screen which is made of two user controls, FareView.xaml and TotalView.Xaml. As you can see from the solution explorer, each of them have their own code behind files and  ViewModel classes, FareViewMode.cs, TotalViewModel.cs Lets take a look in to the FareView and how it interacts with FareViewModel using MVVM implementation. FareView.xaml acts as a view and FareViewMode.cs is it’s view model. The FareView code behind class   namespace TaxiModules.Views { /// <summary> /// Interaction logic for FareView.xaml /// </summary> public partial class FareView : UserControl { public FareView(FareViewModel viewModel) { InitializeComponent(); this.Loaded += (s, e) => { this.DataContext = viewModel; }; } } } The FareView is bound to FareViewModel through the data context  and you shall observe that DataContext is of type Object, i.e. the FareView doesn’t really know the type of ViewModel (FareViewModel). This helps separation of View and ViewModel as View and ViewModel are independent of each other, you can bind FareView to FareViewModel2 as well and the application compiles just fine. Lets take a look into FareView xaml file  <UserControl x:Class="TaxiModules.Views.FareView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Toolkit="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit" xmlns:Commands="clr-namespace:Microsoft.Practices.Prism.Commands;assembly=Microsoft.Practices.Prism"> <Grid Margin="10" > ....   
<Border Style="{DynamicResource innerBorder}" Grid.Row="0" Grid.Column="0" Grid.RowSpan="11" Grid.ColumnSpan="2" Panel.ZIndex="1"/>   <Label Grid.Row="0" Content="Engage the Taxi" Style="{DynamicResource innerHeader}"/> <Label Grid.Row="1" Content="Select the State"/> <ComboBox Grid.Row="1" Grid.Column="1" ItemsSource="{Binding States}" Height="auto"> <ComboBox.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding Name}"/> </DataTemplate> </ComboBox.ItemTemplate> <ComboBox.SelectedItem> <Binding Path="SelectedState" Mode="TwoWay"/> </ComboBox.SelectedItem> </ComboBox> <Label Grid.Row="2" Content="Select the Date of Entry"/> <Toolkit:DatePicker Grid.Row="2" Grid.Column="1" SelectedDate="{Binding DateOfEntry, ValidatesOnDataErrors=true}" /> <Label Grid.Row="3" Content="Enter time 24hr format"/> <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding TimeOfEntry, TargetNullValue=''}"/> <Button Grid.Row="4" Grid.Column="1" Content="Start the Meter" Commands:Click.Command="{Binding StartMeterCommand}" />   <Label Grid.Row="5" Content="Run the Taxi" Style="{DynamicResource innerHeader}"/> <Label Grid.Row="6" Content="Number of Miles &lt;@6mph"/> <TextBox Grid.Row="6" Grid.Column="1" Text="{Binding MilesAtSixMPH, TargetNullValue='', ValidatesOnDataErrors=true}"/> <Label Grid.Row="7" Content="Number of Minutes @12mph"/> <TextBox Grid.Row="7" Grid.Column="1" Text="{Binding MinutesAtTweleveMPH, TargetNullValue=''}"/> <Button Grid.Row="8" Grid.Column="1" Content="Add Minutes and Miles " Commands:Click.Command="{Binding AddMinutesCommand}"/> <Label Grid.Row="9" Content="Other Operations" Style="{DynamicResource innerHeader}"/> <Button Grid.Row="10" Grid.Column="1" Content="Reset the Meter" Commands:Click.Command="{Binding ResetCommand}"/>   </Grid> </UserControl> The highlighted code from the above code shows data binding, for example ComboBox which displays list of states has it’s ItemsSource bound to States property, with DataTemplate bound to Name and SelectedItem  to SelectedState. You might be wondering what are all these properties and how it is able to bind to them.  The answer lies in data context, i.e., when you bound a control, WPF looks for data context on the root object (Grid in this case) and if it can’t find data context it will look into root’s root, i.e. FareView UserControl and it is bound to FareViewModel.  Each of those properties have be declared on the ViewModel for the View to bind correctly. To put simply, View is bound to ViewModel through data context of type object and every control that is bound on the View actually binds to the public property on the ViewModel. Lets look into the ViewModel code (the following code is not an exact copy of FareViewMode.cs, pasted relevant code for this section)   namespace TaxiModules.ViewModels { public class FareViewModel:ObservableBase, IDataErrorInfo { public List<USState> States { get { return USStates.StateList; } }   public USState SelectedState { get { return _selectedState; } set { _selectedState = value; RaisePropertyChanged(_selectedStatePropertyName); } }   public DateTime? DateOfEntry { get { return _dateOfEntry; } set { _dateOfEntry = value; RaisePropertyChanged(_dateOfEntryPropertyName); } }   public TimeSpan? TimeOfEntry { get { return _timeOfEntry; } set { _timeOfEntry = value; RaisePropertyChanged(_timeOfEntryPropertyName); } }   public double? MilesAtSixMPH { get { return _milesAtSixMPH; } set { _milesAtSixMPH = value; RaisePropertyChanged(_distanceAtSixMPHPropertyName); } }   public int? 
MinutesAtTweleveMPH { get { return _minutesAtTweleveMPH; } set { _minutesAtTweleveMPH = value; RaisePropertyChanged(_minutesAtTweleveMPHPropertyName); } }   public ICommand StartMeterCommand { get { if(_startMeterCommand == null) { _startMeterCommand = new DelegateCommand<object>(OnStartMeter, CanStartMeter); } return _startMeterCommand; } }   public ICommand AddMinutesCommand { get { if(_addMinutesCommand == null) { _addMinutesCommand = new DelegateCommand<object>(OnAddMinutes, CanAddMinutes); } return _addMinutesCommand; } }   public ICommand ResetCommand { get { if(_resetCommand == null) { _resetCommand = new DelegateCommand<object>(OnResetCommand); } return _resetCommand; } }   } private void OnStartMeter(object obj) { _eventAggregator.GetEvent<TaxiStartedEvent>().Publish( new TaxiStarted() { EngagedOn = DateOfEntry.Value.Date + TimeOfEntry.Value, EngagedState = SelectedState.Value });   _isMeterStarted = true; OnPropertyChanged(this,null); } And views communicate user actions like button clicks, tree view item selections, etc using commands. When user clicks on ‘Start the Meter’ button it invokes the method StartMeterCommand, which calls the method OnStartMeter which publishes the event to TotalViewModel using event aggregator  and TaxiStartedEvent. namespace TaxiModules.ViewModels { public class TotalViewModel:ObservableBase { ... private IEventAggregator _eventAggregator;   public TotalViewModel(IEventAggregator eventAggregator) { _eventAggregator = eventAggregator;   InitializePropertyNames(); InitializeModel(); SubscribeToEvents(); }   public decimal? TotalFare { get { return _totalFare; } set { _totalFare = value; RaisePropertyChanged(_totalFarePropertyName); } } .... private void SubscribeToEvents() { _eventAggregator.GetEvent<TaxiStartedEvent>().Subscribe(OnTaxiStarted, ThreadOption.UIThread,false,(filter) => true); _eventAggregator.GetEvent<TaxiOnMoveEvent>().Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true); _eventAggregator.GetEvent<TaxiResetEvent>().Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true); }   private void OnTaxiStarted(TaxiStarted taxiStarted) { Fares.Add(new EntryFare()); Fares.Add(new StateTaxFare(taxiStarted)); Fares.Add(new NightSurchargeFare(taxiStarted)); Fares.Add(new PeakHourWeekdayFare(taxiStarted));   SetTotalFare(Fares); }   private void SetTotalFare(IEnumerable<IFare> fares) { TotalFare = (_totalFare ?? 0) + TaxiFareHelper.GetTotalFare(fares); } ....   } }   TotalViewModel subscribes to events, TaxiStartedEvent and rest. When TaxiStartedEvent gets invoked it calls the OnTaxiStarted method which sets the total fare which includes entry fee, state tax, nightly surcharge, peak hour weekday fare.   Note that TotalViewModel derives from ObservableBase which implements the method RaisePropertyChanged which we are invoking in Set of TotalFare property, i.e, once we update the TotalFare property it raises an the event that  allows the TotalFare text box to fetch the new value through the data context. ViewModel is communicating with View through data context and it has no knowledge about View, helping in loose coupling of ViewModel and View.   I have attached the source code (.Net 4.0, Prism 4.0, VS 2010) , download and play with it and don’t forget to leave your comments.  

    Read the article

  • ASA hairpinning: I basically want to allow 2 spokes to be able to communicate with each other.

    - by Thirst4Knowledge
    ASA Spoke to Spoke Communication I have been looking at spke to spoke comms or "hairpining" for months and have posted on numerouse forums but to no avail. I have a Hub and spoke network where the HUB is an ASA Firewall version 8.2 * I basicaly want to allow 2 spokes to be able to communicate with each other. I think that I have got the concept of the ASA Config for example: same-security-traffic permit intra-interface access-list HQ-LAN extended permit ip ASA-LAN 255.255.248.0 HQ-LAN 255.255.255.0 access-list HQ-LAN extended permit ip 192.168.99.0 255.255.255.0 HQ-LAN 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 HQ-LAN 255.255.255.0 access-list no-nat extended permit ip HQ-LAN 255.255.255.0 192.168.99.0 255.255.255.0 access-list no-nat extended permit ip 192.168.99.0 255.255.255.0 HQ-LAN 255.255.255.0 I think my problem may be that the other spokes are not CIsco Firewalls and I need to work out how to do the alternative setups. I want to at least make sure that my firewall etup is correct then I can move onto the other spokes here is my config: Hostname ASA domain-name mydomain.com names ! interface Ethernet0/0 speed 100 duplex full nameif outside security-level 0 ip address 1.1.1.246 255.255.255.224 ! interface Ethernet0/1 speed 100 duplex full nameif inside security-level 100 ip address 192.168.240.33 255.255.255.224 ! interface Ethernet0/2 description DMZ VLAN-253 speed 100 duplex full nameif DMZ security-level 50 ip address 192.168.254.1 255.255.255.0 ! interface Ethernet0/3 no nameif no security-level no ip address ! boot system disk0:/asa821-k8.bin ftp mode passive clock timezone GMT/BST 0 dns server-group DefaultDNS domain-name mydomain.com same-security-traffic permit inter-interface same-security-traffic permit intra-interface object-group network ASA_LAN_Plus_HQ_LAN network-object ASA_LAN 255.255.248.0 network-object HQ-LAN 255.255.255.0 access-list outside_acl remark Exchange web access-list outside_acl extended permit tcp any host MS-Exchange_server-NAT eq https access-list outside_acl remark PPTP Encapsulation access-list outside_acl extended permit gre any host MS-ISA-Server-NAT access-list outside_acl remark PPTP access-list outside_acl extended permit tcp any host MS-ISA-Server-NAT eq pptp access-list outside_acl remark Intra Http access-list outside_acl extended permit tcp any host MS-ISA-Server-NAT eq www access-list outside_acl remark Intra Https access-list outside_acl extended permit tcp any host MS-ISA-Server-NAT eq https access-list outside_acl remark SSL Server-Https 443 access-list outside_acl remark Https 8443(Open VPN Custom port for SSLVPN client downlaod) access-list outside_acl remark FTP 20 access-list outside_acl remark Http access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT object-group DM_INLINE_TCP_1 access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT eq 8443 access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT eq www access-list outside_acl remark For secure remote Managment-SSH access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT eq ssh access-list outside_acl extended permit ip Genimage_Anyconnect 255.255.255.0 ASA_LAN 255.255.248.0 access-list ASP-Live remark Live ASP access-list ASP-Live extended permit ip ASA_LAN 255.255.248.0 192.168.60.0 255.255.255.0 access-list Bo remark Bo access-list Bo extended permit ip ASA_LAN 255.255.248.0 192.168.169.0 255.255.255.0 access-list Bill remark Bill access-list Bill extended permit ip ASA_LAN 255.255.248.0 
Bill.15 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 Bill.5 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.149.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.160.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.165.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.144.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.140.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.152.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.153.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.163.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.157.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.167.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.156.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 North-Office-LAN 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.161.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.143.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.137.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.159.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 HQ-LAN 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.169.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.150.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.162.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.166.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.168.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.174.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.127.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.173.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.175.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.176.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.100.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.99.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 10.10.10.0 255.255.255.0 access-list no-nat extended permit ip host 192.168.240.34 Cisco-admin-LAN 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 Genimage_Anyconnect 255.255.255.0 access-list no-nat extended permit ip host Tunnel-DC host HQ-SDSL-Peer access-list no-nat extended permit ip HQ-LAN 255.255.255.0 North-Office-LAN 255.255.255.0 access-list no-nat extended permit ip North-Office-LAN 255.255.255.0 HQ-LAN 255.255.255.0 access-list Car remark Car access-list Car extended permit ip ASA_LAN 255.255.248.0 192.168.165.0 255.255.255.0 access-list Che remark Che access-list Che extended permit ip ASA_LAN 255.255.248.0 192.168.144.0 255.255.255.0 access-list Chi remark Chi access-list Chi extended permit ip ASA_LAN 255.255.248.0 192.168.140.0 255.255.255.0 access-list Cla remark Cla access-list Cla 
extended permit ip ASA_LAN 255.255.248.0 192.168.152.0 255.255.255.0 access-list Eas remark Eas access-list Eas extended permit ip ASA_LAN 255.255.248.0 192.168.149.0 255.255.255.0 access-list Ess remark Ess access-list Ess extended permit ip ASA_LAN 255.255.248.0 192.168.153.0 255.255.255.0 access-list Gat remark Gat access-list Gat extended permit ip ASA_LAN 255.255.248.0 192.168.163.0 255.255.255.0 access-list Hud remark Hud access-list Hud extended permit ip ASA_LAN 255.255.248.0 192.168.157.0 255.255.255.0 access-list Ilk remark Ilk access-list Ilk extended permit ip ASA_LAN 255.255.248.0 192.168.167.0 255.255.255.0 access-list Ken remark Ken access-list Ken extended permit ip ASA_LAN 255.255.248.0 192.168.156.0 255.255.255.0 access-list North-Office remark North-Office access-list North-Office extended permit ip ASA_LAN 255.255.248.0 North-Office-LAN 255.255.255.0 access-list inside_acl remark Inside_ad access-list inside_acl extended permit ip any any access-list Old_HQ remark Old_HQ access-list Old_HQ extended permit ip ASA_LAN 255.255.248.0 HQ-LAN 255.255.255.0 access-list Old_HQ extended permit ip HQ-LAN 255.255.255.0 192.168.99.0 255.255.255.0 access-list She remark She access-list She extended permit ip ASA_LAN 255.255.248.0 192.168.150.0 255.255.255.0 access-list Lit remark Lit access-list Lit extended permit ip ASA_LAN 255.255.248.0 192.168.143.0 255.255.255.0 access-list Mid remark Mid access-list Mid extended permit ip ASA_LAN 255.255.248.0 192.168.137.0 255.255.255.0 access-list Spi remark Spi access-list Spi extended permit ip ASA_LAN 255.255.248.0 192.168.162.0 255.255.255.0 access-list Tor remark Tor access-list Tor extended permit ip ASA_LAN 255.255.248.0 192.168.166.0 255.255.255.0 access-list Tra remark Tra access-list Tra extended permit ip ASA_LAN 255.255.248.0 192.168.168.0 255.255.255.0 access-list Tru remark Tru access-list Tru extended permit ip ASA_LAN 255.255.248.0 192.168.174.0 255.255.255.0 access-list Yo remark Yo access-list Yo extended permit ip ASA_LAN 255.255.248.0 192.168.127.0 255.255.255.0 access-list Nor remark Nor access-list Nor extended permit ip ASA_LAN 255.255.248.0 192.168.159.0 255.255.255.0 access-list Nor extended permit ip ASA_LAN 255.255.248.0 192.168.173.0 255.255.255.0 inactive access-list ST remark ST access-list ST extended permit ip ASA_LAN 255.255.248.0 192.168.175.0 255.255.255.0 access-list Le remark Le access-list Le extended permit ip ASA_LAN 255.255.248.0 192.168.161.0 255.255.255.0 access-list DMZ-ACL remark DMZ access-list DMZ-ACL extended permit ip host OpenVPN-Srvr any access-list no-nat-dmz remark DMZ -No Nat access-list no-nat-dmz extended permit ip 192.168.250.0 255.255.255.0 HQ-LAN 255.255.255.0 access-list Split_Tunnel_List remark ASA-LAN access-list Split_Tunnel_List standard permit ASA_LAN 255.255.248.0 access-list Split_Tunnel_List standard permit Genimage_Anyconnect 255.255.255.0 access-list outside_cryptomap_30 remark Po access-list outside_cryptomap_30 extended permit ip ASA_LAN 255.255.248.0 Po 255.255.255.0 access-list outside_cryptomap_24 extended permit ip ASA_LAN 255.255.248.0 192.168.100.0 255.255.255.0 access-list outside_cryptomap_16 extended permit ip ASA_LAN 255.255.248.0 192.168.99.0 255.255.255.0 access-list outside_cryptomap_34 extended permit ip ASA_LAN 255.255.248.0 10.10.10.0 255.255.255.0 access-list outside_31_cryptomap extended permit ip host 192.168.240.34 Cisco-admin-LAN 255.255.255.0 access-list outside_32_cryptomap extended permit ip host Tunnel-DC host HQ-SDSL-Peer access-list 
Genimage_VPN_Any_connect_pix_client remark Genimage "Any Connect" VPN access-list Genimage_VPN_Any_connect_pix_client standard permit Genimage_Anyconnect 255.255.255.0 access-list Split-Tunnel-ACL standard permit ASA_LAN 255.255.248.0 access-list nonat extended permit ip HQ-LAN 255.255.255.0 192.168.99.0 255.255.255.0 pager lines 24 logging enable logging timestamp logging console notifications logging monitor notifications logging buffered warnings logging asdm informational no logging message 106015 no logging message 313001 no logging message 313008 no logging message 106023 no logging message 710003 no logging message 106100 no logging message 302015 no logging message 302014 no logging message 302013 no logging message 302018 no logging message 302017 no logging message 302016 no logging message 302021 no logging message 302020 flow-export destination inside MS-ISA-Server 2055 flow-export destination outside 192.168.130.126 2055 flow-export template timeout-rate 1 flow-export delay flow-create 15 mtu outside 1500 mtu inside 1500 mtu DMZ 1500 mtu management 1500 ip local pool RAS-VPN 10.0.0.1.1-10.0.0.1.254 mask 255.255.255.255 icmp unreachable rate-limit 1 burst-size 1 icmp permit any unreachable outside icmp permit any echo outside icmp permit any echo-reply outside icmp permit any outside icmp permit any echo inside icmp permit any echo-reply inside icmp permit any echo DMZ icmp permit any echo-reply DMZ asdm image disk0:/asdm-621.bin no asdm history enable arp timeout 14400 nat-control global (outside) 1 interface global (inside) 1 interface nat (inside) 0 access-list no-nat nat (inside) 1 0.0.0.0 0.0.0.0 nat (DMZ) 0 access-list no-nat-dmz static (inside,outside) MS-ISA-Server-NAT MS-ISA-Server netmask 255.255.255.255 static (DMZ,outside) OpenVPN-Srvr-NAT OpenVPN-Srvr netmask 255.255.255.255 static (inside,outside) MS-Exchange_server-NAT MS-Exchange_server netmask 255.255.255.255 access-group outside_acl in interface outside access-group inside_acl in interface inside access-group DMZ-ACL in interface DMZ route outside 0.0.0.0 0.0.0.0 1.1.1.225 1 route inside 10.10.10.0 255.255.255.0 192.168.240.34 1 route outside Genimage_Anyconnect 255.255.255.0 1.1.1.225 1 route inside Open-VPN 255.255.248.0 OpenVPN-Srvr 1 route inside HQledon-Voice-LAN 255.255.255.0 192.168.240.34 1 route outside Bill 255.255.255.0 1.1.1.225 1 route outside Yo 255.255.255.0 1.1.1.225 1 route inside 192.168.129.0 255.255.255.0 192.168.240.34 1 route outside HQ-LAN 255.255.255.0 1.1.1.225 1 route outside Mid 255.255.255.0 1.1.1.225 1 route outside 192.168.140.0 255.255.255.0 1.1.1.225 1 route outside 192.168.143.0 255.255.255.0 1.1.1.225 1 route outside 192.168.144.0 255.255.255.0 1.1.1.225 1 route outside 192.168.149.0 255.255.255.0 1.1.1.225 1 route outside 192.168.152.0 255.255.255.0 1.1.1.225 1 route outside 192.168.153.0 255.255.255.0 1.1.1.225 1 route outside North-Office-LAN 255.255.255.0 1.1.1.225 1 route outside 192.168.156.0 255.255.255.0 1.1.1.225 1 route outside 192.168.157.0 255.255.255.0 1.1.1.225 1 route outside 192.168.159.0 255.255.255.0 1.1.1.225 1 route outside 192.168.160.0 255.255.255.0 1.1.1.225 1 route outside 192.168.161.0 255.255.255.0 1.1.1.225 1 route outside 192.168.162.0 255.255.255.0 1.1.1.225 1 route outside 192.168.163.0 255.255.255.0 1.1.1.225 1 route outside 192.168.165.0 255.255.255.0 1.1.1.225 1 route outside 192.168.166.0 255.255.255.0 1.1.1.225 1 route outside 192.168.167.0 255.255.255.0 1.1.1.225 1 route outside 192.168.168.0 255.255.255.0 1.1.1.225 1 route outside 
192.168.173.0 255.255.255.0 1.1.1.225 1 route outside 192.168.174.0 255.255.255.0 1.1.1.225 1 route outside 192.168.175.0 255.255.255.0 1.1.1.225 1 route outside 192.168.99.0 255.255.255.0 1.1.1.225 1 route inside ASA_LAN 255.255.255.0 192.168.240.34 1 route inside 192.168.124.0 255.255.255.0 192.168.240.34 1 route inside 192.168.50.0 255.255.255.0 192.168.240.34 1 route inside 192.168.51.0 255.255.255.128 192.168.240.34 1 route inside 192.168.240.0 255.255.255.224 192.168.240.34 1 route inside 192.168.240.164 255.255.255.224 192.168.240.34 1 route inside 192.168.240.196 255.255.255.224 192.168.240.34 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa-server vpn protocol radius max-failed-attempts 5 aaa-server vpn (inside) host 192.168.X.2 timeout 60 key a5a53r3t authentication-port 1812 radius-common-pw a5a53r3t aaa authentication ssh console LOCAL aaa authentication http console LOCAL http server enable http 0.0.0.0 0.0.0.0 inside http 1.1.1.2 255.255.255.255 outside http 1.1.1.234 255.255.255.255 outside http 0.0.0.0 0.0.0.0 management http 1.1.100.198 255.255.255.255 outside http 0.0.0.0 0.0.0.0 outside crypto map FW_Outside_map 1 match address Bill crypto map FW_Outside_map 1 set peer x.x.x.121 crypto map FW_Outside_map 1 set transform-set SECURE crypto map FW_Outside_map 2 match address Bo crypto map FW_Outside_map 2 set peer x.x.x.202 crypto map FW_Outside_map 2 set transform-set SECURE crypto map FW_Outside_map 3 match address ASP-Live crypto map FW_Outside_map 3 set peer x.x.x.113 crypto map FW_Outside_map 3 set transform-set SECURE crypto map FW_Outside_map 4 match address Car crypto map FW_Outside_map 4 set peer x.x.x.205 crypto map FW_Outside_map 4 set transform-set SECURE crypto map FW_Outside_map 5 match address Old_HQ crypto map FW_Outside_map 5 set peer x.x.x.2 crypto map FW_Outside_map 5 set transform-set SECURE WG crypto map FW_Outside_map 6 match address Che crypto map FW_Outside_map 6 set peer x.x.x.204 crypto map FW_Outside_map 6 set transform-set SECURE crypto map FW_Outside_map 7 match address Chi crypto map FW_Outside_map 7 set peer x.x.x.212 crypto map FW_Outside_map 7 set transform-set SECURE crypto map FW_Outside_map 8 match address Cla crypto map FW_Outside_map 8 set peer x.x.x.215 crypto map FW_Outside_map 8 set transform-set SECURE crypto map FW_Outside_map 9 match address Eas crypto map FW_Outside_map 9 set peer x.x.x.247 crypto map FW_Outside_map 9 set transform-set SECURE crypto map FW_Outside_map 10 match address Ess crypto map FW_Outside_map 10 set peer x.x.x.170 crypto map FW_Outside_map 10 set transform-set SECURE crypto map FW_Outside_map 11 match address Hud crypto map FW_Outside_map 11 set peer x.x.x.8 crypto map FW_Outside_map 11 set transform-set SECURE crypto map FW_Outside_map 12 match address Gat crypto map FW_Outside_map 12 set peer x.x.x.212 crypto map FW_Outside_map 12 set transform-set SECURE crypto map FW_Outside_map 13 match address Ken crypto map FW_Outside_map 13 set peer x.x.x.230 crypto map FW_Outside_map 13 set transform-set SECURE crypto map FW_Outside_map 14 match address She crypto map FW_Outside_map 14 set peer x.x.x.24 crypto map FW_Outside_map 14 set transform-set SECURE crypto map 
FW_Outside_map 15 match address North-Office crypto map FW_Outside_map 15 set peer x.x.x.94 crypto map FW_Outside_map 15 set transform-set SECURE crypto map FW_Outside_map 16 match address outside_cryptomap_16 crypto map FW_Outside_map 16 set peer x.x.x.134 crypto map FW_Outside_map 16 set transform-set SECURE crypto map FW_Outside_map 16 set security-association lifetime seconds crypto map FW_Outside_map 17 match address Lit crypto map FW_Outside_map 17 set peer x.x.x.110 crypto map FW_Outside_map 17 set transform-set SECURE crypto map FW_Outside_map 18 match address Mid crypto map FW_Outside_map 18 set peer 78.x.x.110 crypto map FW_Outside_map 18 set transform-set SECURE crypto map FW_Outside_map 19 match address Sp crypto map FW_Outside_map 19 set peer x.x.x.47 crypto map FW_Outside_map 19 set transform-set SECURE crypto map FW_Outside_map 20 match address Tor crypto map FW_Outside_map 20 set peer x.x.x.184 crypto map FW_Outside_map 20 set transform-set SECURE crypto map FW_Outside_map 21 match address Tr crypto map FW_Outside_map 21 set peer x.x.x.75 crypto map FW_Outside_map 21 set transform-set SECURE crypto map FW_Outside_map 22 match address Yo crypto map FW_Outside_map 22 set peer x.x.x.40 crypto map FW_Outside_map 22 set transform-set SECURE crypto map FW_Outside_map 23 match address Tra crypto map FW_Outside_map 23 set peer x.x.x.145 crypto map FW_Outside_map 23 set transform-set SECURE crypto map FW_Outside_map 24 match address outside_cryptomap_24 crypto map FW_Outside_map 24 set peer x.x.x.46 crypto map FW_Outside_map 24 set transform-set SECURE crypto map FW_Outside_map 24 set security-association lifetime seconds crypto map FW_Outside_map 25 match address Nor crypto map FW_Outside_map 25 set peer x.x.x.70 crypto map FW_Outside_map 25 set transform-set SECURE crypto map FW_Outside_map 26 match address Ilk crypto map FW_Outside_map 26 set peer x.x.x.65 crypto map FW_Outside_map 26 set transform-set SECURE crypto map FW_Outside_map 27 match address Nor crypto map FW_Outside_map 27 set peer x.x.x.240 crypto map FW_Outside_map 27 set transform-set SECURE crypto map FW_Outside_map 28 match address ST crypto map FW_Outside_map 28 set peer x.x.x.163 crypto map FW_Outside_map 28 set transform-set SECURE crypto map FW_Outside_map 28 set security-association lifetime seconds crypto map FW_Outside_map 28 set security-association lifetime kilobytes crypto map FW_Outside_map 29 match address Lei crypto map FW_Outside_map 29 set peer x.x.x.4 crypto map FW_Outside_map 29 set transform-set SECURE crypto map FW_Outside_map 30 match address outside_cryptomap_30 crypto map FW_Outside_map 30 set peer x.x.x.34 crypto map FW_Outside_map 30 set transform-set SECURE crypto map FW_Outside_map 31 match address outside_31_cryptomap crypto map FW_Outside_map 31 set pfs crypto map FW_Outside_map 31 set peer Cisco-admin-Peer crypto map FW_Outside_map 31 set transform-set ESP-AES-256-SHA crypto map FW_Outside_map 32 match address outside_32_cryptomap crypto map FW_Outside_map 32 set pfs crypto map FW_Outside_map 32 set peer HQ-SDSL-Peer crypto map FW_Outside_map 32 set transform-set ESP-AES-256-SHA crypto map FW_Outside_map 34 match address outside_cryptomap_34 crypto map FW_Outside_map 34 set peer x.x.x.246 crypto map FW_Outside_map 34 set transform-set ESP-AES-128-SHA ESP-AES-192-SHA ESP-AES-256-SHA crypto map FW_Outside_map 65535 ipsec-isakmp dynamic dynmap crypto map FW_Outside_map interface outside crypto map FW_outside_map 31 set peer x.x.x.45 crypto isakmp identity address crypto isakmp enable 
outside crypto isakmp policy 9 webvpn enable outside svc enable group-policy ASA-LAN-VPN internal group-policy ASA_LAN-VPN attributes wins-server value 192.168.x.1 192.168.x.2 dns-server value 192.168.x.1 192.168.x.2 vpn-tunnel-protocol IPSec svc split-tunnel-policy tunnelspecified split-tunnel-network-list value Split-Tunnel-ACL default-domain value MYdomain username xxxxxxxxxx password privilege 15 tunnel-group DefaultRAGroup ipsec-attributes isakmp keepalive threshold 30 retry 2 tunnel-group DefaultWEBVPNGroup ipsec-attributes isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.121 type ipsec-l2l tunnel-group x.x.x..121 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.202 type ipsec-l2l tunnel-group x.x.x.202 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.113 type ipsec-l2l tunnel-group x.x.x.113 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.205 type ipsec-l2l tunnel-group x.x.x.205 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.204 type ipsec-l2l tunnel-group x.x.x.204 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.212 type ipsec-l2l tunnel-group x.x.x.212 ipsec-attributes pre-shared-key * tunnel-group x.x.x.215 type ipsec-l2l tunnel-group x.x.x.215 ipsec-attributes pre-shared-key * tunnel-group x.x.x.247 type ipsec-l2l tunnel-group x.x.x.247 ipsec-attributes pre-shared-key * tunnel-group x.x.x.170 type ipsec-l2l tunnel-group x.x.x.170 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x..8 type ipsec-l2l tunnel-group x.x.x.8 ipsec-attributes pre-shared-key * tunnel-group x.x.x.212 type ipsec-l2l tunnel-group x.x.x.212 ipsec-attributes pre-shared-key * tunnel-group x.x.x.230 type ipsec-l2l tunnel-group x.x.x.230 ipsec-attributes pre-shared-key * tunnel-group x.x.x.24 type ipsec-l2l tunnel-group x.x.x.24 ipsec-attributes pre-shared-key * tunnel-group x.x.x.46 type ipsec-l2l tunnel-group x.x.x.46 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.4 type ipsec-l2l tunnel-group x.x.x.4 ipsec-attributes pre-shared-key * tunnel-group x.x.x.110 type ipsec-l2l tunnel-group x.x.x.110 ipsec-attributes pre-shared-key * tunnel-group 78.x.x.110 type ipsec-l2l tunnel-group 78.x.x.110 ipsec-attributes pre-shared-key * tunnel-group x.x.x.47 type ipsec-l2l tunnel-group x.x.x.47 ipsec-attributes pre-shared-key * tunnel-group x.x.x.34 type ipsec-l2l tunnel-group x.x.x.34 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x..129 type ipsec-l2l tunnel-group x.x.x.129 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.94 type ipsec-l2l tunnel-group x.x.x.94 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.40 type ipsec-l2l tunnel-group x.x.x.40 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.65 type ipsec-l2l tunnel-group x.x.x.65 ipsec-attributes pre-shared-key * tunnel-group x.x.x.70 type ipsec-l2l tunnel-group x.x.x.70 ipsec-attributes pre-shared-key * tunnel-group x.x.x.134 type ipsec-l2l tunnel-group x.x.x.134 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.163 type ipsec-l2l tunnel-group x.x.x.163 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.2 type ipsec-l2l tunnel-group x.x.x.2 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group 
ASA-LAN-VPN type remote-access tunnel-group ASA-LAN-VPN general-attributes address-pool RAS-VPN authentication-server-group vpn authentication-server-group (outside) vpn default-group-policy ASA-LAN-VPN tunnel-group ASA-LAN-VPN ipsec-attributes pre-shared-key * tunnel-group x.x.x.184 type ipsec-l2l tunnel-group x.x.x.184 ipsec-attributes pre-shared-key * tunnel-group x.x.x.145 type ipsec-l2l tunnel-group x.x.x.145 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.75 type ipsec-l2l tunnel-group x.x.x.75 ipsec-attributes pre-shared-key * tunnel-group x.x.x.246 type ipsec-l2l tunnel-group x.x.x.246 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.2 type ipsec-l2l tunnel-group x.x.x..2 ipsec-attributes pre-shared-key * tunnel-group x.x.x.98 type ipsec-l2l tunnel-group x.x.x.98 ipsec-attributes pre-shared-key * ! ! ! policy-map global_policy description Netflow class class-default flow-export event-type all destination MS-ISA-Server policy-map type inspect dns migrated_dns_map_1 parameters message-length maximum 512 Anyone have a clue because Im on the verge of going postal.....

    Read the article

  • Agile Development

    - by James Oloo Onyango
    Alot of literature has and is being written about agile developement and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, i have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and thorough! Enjoy the read! Rufus Inc Project Kick Off Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting. "We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market. "We must have this new project up and working by fourth quarter October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.   His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts.   "Sir, we can't tell you how long the analysis will take until we have some requirements." "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?" Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?" "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere.   "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?" Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months."   "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working. 
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawedno, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."   So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.     The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week." Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagram are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association. Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."   You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents. 
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"   You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.   Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1."   As he leaves the conference room, he is heard to mutter: "Empowermentbah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done? " he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.   
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now."Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation." Says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say, "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper." says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."   In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels can be achieved in the next 3 weeks.   And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things have quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements, Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.   "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."   * * *   The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"   "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."   "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method." "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hey!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.   As development proceeded, the developers changed partners once or twice a day. 
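(As an aside, the registry test that Joe coaxes out of Elaine above might look roughly like this in C#. It is only an illustrative sketch: the Database and DatabaseRegistry names and the NUnit-style [Test] attribute are assumptions, not code from the book.)

    [Test]
    public void StoredDatabaseCanBeRetrievedAgain()
    {
        // Write the test before the classes exist: it drives out a registry
        // that can cache a database object and hand back the same instance.
        var database = new Database();          // assumed type
        var registry = new DatabaseRegistry();  // assumed type

        registry.Store(database);

        Assert.AreSame(database, registry.Get());
    }
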
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant, whether a whole task or simply an important part of a task, they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane, but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of ASP.NET 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and use OData when using the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie. 
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns a 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE — is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “GET” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMovie() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method: Getting a Set of Records with the ASP.NET Web API Let’s modify our Movie API controller so that it returns a collection of movies. 
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=1, Title="King Kong", Director="Jackson"}, new Movie {Id=1, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8", }).then(function(movies){ callback(movies); }); } </script> </body> </html>     Inserting a Record with the ASP.NET Web API Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /Movie/PostMovie – just as long as the method is invoked within the context of a HTTP POST request. The following HTML page invokes the PostMovie() method. 
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "Jackson" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }); function createMovie(movieToCreate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); } </script> </body> </html> This page creates a new movie (the Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie: The HTTP Post operation is performed with the following call to the jQuery $.ajax() method: $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method returns a 201 status code. Updating a Record with the ASP.NET Web API Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request: public void PutMovie(Movie movieToUpdate) { if (movieToUpdate.Id == 1) { // Update the movie in the database return; } // If you can't find the movie to update throw new HttpResponseException(HttpStatusCode.NotFound); } Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP Status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Put Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" }; updateMovie(movieToUpdate, function () { alert("Movie updated!"); }); function updateMovie(movieToUpdate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToUpdate), type: "PUT", contentType: "application/json;charset=utf-8", statusCode: { 200: function () { callback(); }, 404: function () { alert("Movie not found!"); } } }); } </script> </body> </html> Deleting a Record with the ASP.NET Web API Here’s the code for deleting a movie: public HttpResponseMessage DeleteMovie(int id) { // Delete the movie from the database // Return status code return new HttpResponseMessage(HttpStatusCode.NoContent); } This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204). 
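As a rough sketch, and not code from the article, of what an actual implementation might look like: a version backed by a data-access object could distinguish a deleted movie from a missing one. The movieRepository member below is a hypothetical dependency introduced only for illustration.

    public HttpResponseMessage DeleteMovie(int id)
    {
        // movieRepository is an assumed data-access dependency, not part of the article's code.
        bool deleted = movieRepository.Delete(id);

        if (!deleted)
        {
            // Nothing was deleted, so report 404 Not Found.
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }

        // Deleted successfully: report 204 No Content, as the article's stub does.
        return new HttpResponseMessage(HttpStatusCode.NoContent);
    }
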
The following page illustrates how you can invoke the DeleteMovie() action: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Delete Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> deleteMovie(1, function () { alert("Movie deleted!"); }); function deleteMovie(id, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify({id:id}), type: "DELETE", contentType: "application/json;charset=utf-8", statusCode: { 204: function () { callback(); } } }); } </script> </body> </html> Performing Validation How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes: using System.ComponentModel.DataAnnotations; namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } [Required(ErrorMessage="Title is required!")] [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")] public string Title { get; set; } [Required(ErrorMessage="Director is required!")] public string Director { get; set; } } } In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database: public HttpResponseMessage PostMovie(Movie movieToCreate) { // Validate movie if (!ModelState.IsValid) { var errors = new JsonArray(); foreach (var prop in ModelState.Values) { if (prop.Errors.Any()) { errors.Add(prop.Errors.First().ErrorMessage); } } return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property — can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code). 
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }, function (errors) { var strErrors = ""; $.each(errors, function(index, err) { strErrors += "*" + err + "\n"; }); alert(strErrors); } ); function createMovie(movieToCreate, success, fail) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { success(newMovie); }, 400: function (xhr) { var errors = JSON.parse(xhr.responseText); fail(errors); } } }); } </script> </body> </html> The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property. The error messages are displayed in an alert: (Please don’t use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness) This validation code in our PostMovie() method is pretty generic. There is nothing specific about this code to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this: using System.Json; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http.Controllers; using System.Web.Http.Filters; namespace MyWebAPIApp.Filters { public class ValidationActionFilter:ActionFilterAttribute { public override void OnActionExecuting(HttpActionContext actionContext) { var modelState = actionContext.ModelState; if (!modelState.IsValid) { dynamic errors = new JsonObject(); foreach (var key in modelState.Keys) { var state = modelState[key]; if (state.Errors.Any()) { errors[key] = state.Errors.First().ErrorMessage; } } actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } } } } And you can register the validation filter in the Application_Start() method in the Global.asax file like this: GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter()); After you register the Validation filter, validation error messages are returned from any API controller action method automatically when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter. Querying using OData The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData: · $orderby – Enables you to retrieve results in a certain order. · $top – Enables you to retrieve a certain number of results. 
· $skip – Enables you to skip over a certain number of results (use with $top for paging). · $filter – Enables you to filter the results returned. The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies: public IQueryable<Movie> GetMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Willow", Director="Lucas"}, new Movie {Id=4, Title="Shrek", Director="Smith"}, new Movie {Id=5, Title="Memento", Director="Nolan"} }.AsQueryable(); } If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title Then you will limit the movies returned to the top 2 in order of the movie Title. You will get the following results: By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples: Return every movie directed by Lucas: /api/movie?$filter=Director eq ‘Lucas’ Return every movie which has a title which starts with ‘S’: /api/movie?$filter=startswith(Title,’S') Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2 The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption Summary The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.
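(As a brief illustration of the client-side paging scenario mentioned above: against the GetMovies() action, combining $orderby, $top, and $skip pages through the results. The page size of 10 is chosen here only as an example.)

    /api/movie?$orderby=Title&$top=10              (page 1)
    /api/movie?$orderby=Title&$top=10&$skip=10     (page 2)
    /api/movie?$orderby=Title&$top=10&$skip=20     (page 3)
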

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use. The 'Red' (pt. 1) First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states. Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There's no implementation that the IoC-container could load, of course. So let's fix that. With R#, this is very easy: First, create an ICalculator - derived type. Next, implement the interface members. And finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing that's left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back to the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
       container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The 'Red' (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure… Conclusions The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…). The test code serves 'only' to make the production code work. But it's the number of delivered features that solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. Historically, the entire software development profession is very young - it is only at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…) Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for 'saving' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard that quite a few times…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the 'product' manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences this might have… Some last words First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be "good enough" in my practical experience. But of course, I'm open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the 'right reason' is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
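To make the distinction concrete, here is a small sketch of my own (hypothetical code, not taken from Derick's posts) that contrasts the two kinds of 'red', using the same MbUnit/Gallio style as the examples above:

// Red by exception: the production code throws before the assertion is ever evaluated.
// During the early red-green phase this only tells me that the right method is being called.
[Test]
public void LastResultIsInitiallyNull_RedByException()
{
    ICalculator calculator = container.GetService<ICalculator>();

    // The LastResult getter still throws NotImplementedException here,
    // so Assert.IsNull() is never reached.
    Assert.IsNull(calculator.LastResult);
}

// Red by assertion: the code under test runs to completion, and the failure
// is an unfulfilled assertion about its observable behavior.
[Test]
[Row(1, 1, 2)]
public void Add_RedByAssertion(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result); // fails only if Add() returns a wrong value
}

Once the suite runs as part of Continuous Integration, only the second kind of failure should ever occur.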

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionState and localStorage are very simple. Both object feature the same API interface which  is a simple, string based key value store that has getItem, setItem, removeitem, clear and  key methods. The objects are also pseudo array objects and so can be iterated like an array with  a length property and you have array indexers to set and get values with. Basic usage  for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):// set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionState stores data that is browser session specific and that has a liftetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the lifetime of the data is permanently stored in the browsers storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store especially since other applications might be running on the same domain and also use the storage mechanisms. That said 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run Javascript code and similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact. 
The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}
function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key, stringVal); Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable so it's quite possible to pass complex objects and store them into a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab: You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow to see what I mean here. Scroll down a bit to look at a post or entry, drill in then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal. 
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive. Content Caching If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px) Here's what the mobile, non-frames view looks like: As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage! 
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: Event Handling Logic Timing of manipulating DOM events Inline Script Code Bookmarking to the Cache Url when no cache exists Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign or it may not run at the time you think it would normally in the DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because MVC's limited 'object model' the only way to embed widget content into the page is through inline script. This code broken when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As the last bullet - it's a matter of timing. There's no good work around for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant to be primarily to serve mobile devices which actually support date input through the browser (unlike desktop browsers of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. 
In the Classifieds app I didn't render server side content so if the user comes to the page with back=True and there is no cached content I have to a have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAXInstead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is made on the client in this case. Using JSON Data and Client RenderingThe hardcore client option is to do the whole UI SPA style and pull data from the server and then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a  full on SPA app, then caching may not be required even - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well especially using  AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. 
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here… Resources Overview of localStorage (also applies to sessionStorage) Web Storage Compatibility Modernizr Test Suite. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • DHCP settings out of range Internet shuts off after a few minutes

    - by user263115
    I recently upgraded from windows eight to windows 8.1 I do not know if this has anything to do with anything I have a 64 bit OS. My Internet goes off by itself every 5 minutes even though my wireless icon at the lower right of the screen still shows connected I had an error message in the last event in it said that might DHCP settings were out of range. I get my internet at my house through a wireless portable hotspot through my smart phone. But i haven't ever had any problems before and i only have this problem on this network. If i turn airplane mode on and reset my network card, the internet will come back to life but soon die. i don't experience this problem while on a different network or if i'm on WiFi. This s really annoying please help Windows IP Configuration Host Name . . . . . . . . . . . . : NastyMcnastyJr Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : TeamViewer VPN Adapter Physical Address. . . . . . . . . : 00-FF-5D-13-26-21 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Wireless LAN adapter Local Area Connection* 11: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Wi-Fi Direct Virtual Adapter Physical Address. . . . . . . . . : F6-B7-E2-50-09-38 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Wireless LAN adapter SAMMY McNASTY: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom 802.11n Network Adapter Physical Address. . . . . . . . . : F4-B7-E2-50-09-38 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::3107:66bc:cf1f:c776%4(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.43.3(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : Friday, November 1, 2013 9:50:20 PM Lease Expires . . . . . . . . . . : Saturday, November 2, 2013 12:56:46 AM Default Gateway . . . . . . . . . : 192.168.43.1 DHCP Server . . . . . . . . . . . : 192.168.43.1 DHCPv6 IAID . . . . . . . . . . . : 83146722 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-19-F1-98-B4-20-89-84-84-61-BB DNS Servers . . . . . . . . . . . : 192.168.43.1 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter Ethernet: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet Physical Address. . . . . . . . . : 20-89-84-84-61-BB DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Log Name: System Source: Microsoft-Windows-UserPnp Date: 10/26/2013 7:52:23 PM Event ID: 20003 Task Category: (7005) Level: Information Keywords: User: SYSTEM Computer: NastyMcnastyJr Description: Driver Management has concluded the process to add Service vwifibus for Device Instance ID PCI\VEN_14E4&DEV_4727&SUBSYS_E042105B&REV_01\4&3265ADAB&0&00E1 with the following status: 0. 
Event Xml: (raw event XML omitted - only stripped tag values survived extraction)

Log Name: System
Source: Microsoft-Windows-UserPnp
Date: 10/19/2013 3:29:12 PM
Event ID: 20001
Task Category: (7005)
Level: Information
Keywords:
User: SYSTEM
Computer: NastyMcnastyJr
Description: Driver Management concluded the process to install driver netbc64.inf_amd64_0df63b5297d0f820\netbc64.inf for Device Instance ID PCI\VEN_14E4&DEV_4727&SUBSYS_E042105B&REV_01\4&3265ADAB&0&00E1 with the following status: 0x0.
Event Xml: (raw event XML omitted)

Log Name: System
Source: Microsoft-Windows-DNS-Client
Date: 11/2/2013 12:24:59 AM
Event ID: 1014
Task Category: (1014)
Level: Warning
Keywords: (268435456)
User: NETWORK SERVICE
Computer: NastyMcnastyJr
Description: Name resolution for the name www.google.com timed out after none of the configured DNS servers responded.
Event Xml: (raw event XML omitted)

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series. What is WCF As Juval Löwy puts it in his Programming WCF Services book, WCF provides an SDK for developing and deploying services on Windows, and provides a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off the shelf plumbing including service hosting, instance management, reliability, transaction management, security etc., such that it greatly increases productivity. Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, with each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, the data access layer, and unit tests to verify end to end communication - nothing big here - and we will dig into other features of WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide functionality to execute various pieces of business logic on the server, and clients provide interaction for the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against TCP and HTTP protocols without any significant changes to the code. Solution
Services
Profile: For creating a new bank customer, getting details about an existing customer - ProfileContract, ProfileService
Checking Account: To get the checking account balance, deposit or withdraw an amount - CheckingAccountContract, CheckingAccountService
Savings Account: To get the savings account balance, deposit or withdraw an amount - SavingsAccountContract, SavingsAccountService
ServiceHost: To host the services, i.e. running the services at a particular address, binding and contract where clients can connect
Client: Helps the end user use the services, like creating an account and transferring amounts between the accounts
BankDAL: Data access layer to work with the database
BankDAL It's a no-brainer to use an ORM, as many mature products are currently available on the market including Linq2Sql, Entity Framework (EF), LLblGenPro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5 with the code first approach. There are two approaches when working with data: data driven and code driven. In data driven we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create tables and table constraints. In previous versions the entity classes had to derive from EF specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of 3 entities: a Customer entity which contains customer details, and CheckingAccount and SavingsAccount to hold the respective account balances. 
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-1 mapping between entities and tables. Let's start out by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all.

using System;

namespace BankDAL.Model
{
    public class Customer
    {
        public int Id { get; set; }
        public string FullName { get; set; }
        public string Address { get; set; }
        public DateTime DateOfBirth { get; set; }
    }
}

In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration.

using System.Data.Entity;

namespace BankDAL.Model
{
    public class BankDbContext: DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }
}

Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to wrestle with xml files), as in the CustomerConfiguration class. By defining constraints in a separate class we can maintain clean POCOs without corrupting entity classes with database specific information.

using System;
using System.Data.Entity.ModelConfiguration;

namespace BankDAL.Model
{
    public class CustomerConfiguration: EntityTypeConfiguration<Customer>
    {
        public CustomerConfiguration()
        {
            Initialize();
        }

        private void Initialize()
        {
            //Setting the Primary Key
            this.HasKey(e => e.Id);

            //Setting required fields
            this.HasRequired(e => e.FullName);
            this.HasRequired(e => e.Address);
            //Todo: Can't create required constraint as DateOfBirth is not reference type, research it
            //this.HasRequired(e => e.DateOfBirth);
        }
    }
}

Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention EF looks for a connection string with the key BankDbContext when working with the context. We are going to define a helper class to work with the Customer entity, with methods for querying, adding new entities etc., and these are known as repository classes, i.e. CustomerRepository.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CustomerRepository
    {
        private readonly IDbSet<Customer> _customers;

        public CustomerRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null)
                throw new ArgumentNullException();
            _customers = bankDbContext.Customers;
        }

        public IQueryable<Customer> Query()
        {
            return _customers;
        }

        public void Add(Customer customer)
        {
            _customers.Add(customer);
        }
    }
}

From the above code it is observable that the Query method returns customers as IQueryable, i.e. customers are retrieved only when actually used, i.e. iterated. Returning IQueryable also allows executing filtering and joining statements from the business logic using lambda expressions without cluttering the data access layer with tens of methods. 
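The CheckingAccount and SavingsAccount POCOs are not shown in this part of the series. As a purely hypothetical sketch (my assumption, inferred from the CheckingAccountRepository in the next section, which queries accounts by CustomerId), CheckingAccount might look roughly like this:

using System;

namespace BankDAL.Model
{
    // Hypothetical sketch - this class is not shown in the article.
    // CustomerId is assumed because CheckingAccountRepository.GetAccount filters by it.
    public class CheckingAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }
}

SavingsAccount would presumably look the same, and BankDbContext would need corresponding DbSet<CheckingAccount> and DbSet<SavingsAccount> properties for the repositories below to work.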
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CheckingAccountRepository
    {
        private readonly IDbSet<CheckingAccount> _checkingAccounts;

        public CheckingAccountRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null)
                throw new ArgumentNullException();
            _checkingAccounts = bankDbContext.CheckingAccounts;
        }

        public IQueryable<CheckingAccount> Query()
        {
            return _checkingAccounts;
        }

        public void Add(CheckingAccount account)
        {
            _checkingAccounts.Add(account);
        }

        public IQueryable<CheckingAccount> GetAccount(int customerId)
        {
            return (from act in _checkingAccounts
                    where act.CustomerId == customerId
                    select act);
        }
    }
}

The repository classes duplicate the Query and Add methods; with the help of C# generics and the repository pattern (Martin Fowler) we can eliminate the repeated code. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in subsequent articles along with the WCF Unity lifetime managers by Drew.

Contracts
It is very easy to follow a contract first approach with WCF: define the interface and add the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.

using System;
using System.ServiceModel;
using BankDAL.Model;

namespace ProfileContract
{
    [ServiceContract]
    public interface IProfile
    {
        [OperationContract]
        Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);

        [OperationContract]
        Customer GetCustomer(int id);
    }
}

The ICheckingAccount contract exposes the functionality for working with a checking account, i.e. getting the balance and depositing or withdrawing an amount. The ISavingsAccount contract looks the same as the checking account one.

using System.ServiceModel;

namespace CheckingAccountContract
{
    [ServiceContract]
    public interface ICheckingAccount
    {
        [OperationContract]
        decimal? GetCheckingAccountBalance(int customerId);

        [OperationContract]
        void DepositAmount(int customerId, decimal amount);

        [OperationContract]
        void WithdrawAmount(int customerId, decimal amount);
    }
}
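The ISavingsAccount contract is not listed in the post; assuming it mirrors ICheckingAccount, and using the GetSavingsAccountBalance operation name that the unit tests call later, a sketch would look like this. Keeping the deposit and withdraw signatures identical across the two contracts is what lets the transfer helpers in the tests treat the accounts symmetrically.

using System.ServiceModel;

namespace SavingsAccountContract
{
    // Assumed mirror of ICheckingAccount; the article states the two contracts look the same.
    [ServiceContract]
    public interface ISavingsAccount
    {
        [OperationContract]
        decimal? GetSavingsAccountBalance(int customerId);

        [OperationContract]
        void DepositAmount(int customerId, decimal amount);

        [OperationContract]
        void WithdrawAmount(int customerId, decimal amount);
    }
}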
Services
Having covered the data access layer and the contracts, here comes the core of the business logic: the services. ProfileService implements the IProfile contract, creating a customer and getting customer details using the CustomerRepository.
using System;
using System.Linq;
using System.ServiceModel;
using BankDAL;
using BankDAL.Model;
using BankDAL.Repositories;
using ProfileContract;

namespace ProfileService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Profile : IProfile
    {
        public Customer CreateAccount(string customerName, string address, DateTime dateOfBirth)
        {
            Customer cust = new Customer
            {
                FullName = customerName,
                Address = address,
                DateOfBirth = dateOfBirth
            };

            using (var bankDbContext = new BankDbContext())
            {
                new CustomerRepository(bankDbContext).Add(cust);
                bankDbContext.SaveChanges();
            }
            return cust;
        }

        public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth)
        {
            return CreateAccount(customerName, address, dateOfBirth);
        }

        public Customer GetCustomer(int id)
        {
            return new CustomerRepository(new BankDbContext()).Query()
                .Where(i => i.Id == id).FirstOrDefault();
        }
    }
}

From the above code you will notice that we call bankDbContext's SaveChanges method and that there is no save method specific to the Customer entity: EF tracks all changes centrally at the context level, and all pending changes are submitted in one batch, representing a unit of work.

Similarly, the Checking service implements the ICheckingAccount contract using the CheckingAccountRepository; notice that we throw an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in subsequent articles. SavingsAccountService is similar to CheckingAccountService.

using System;
using System.Linq;
using System.ServiceModel;
using BankDAL.Model;
using BankDAL.Repositories;
using CheckingAccountContract;

namespace CheckingAccountService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Checking : ICheckingAccount
    {
        public decimal? GetCheckingAccountBalance(int customerId)
        {
            using (var bankDbContext = new BankDbContext())
            {
                CheckingAccount account = (new CheckingAccountRepository(bankDbContext)
                    .GetAccount(customerId)).FirstOrDefault();

                if (account != null)
                    return account.Balance;

                return null;
            }
        }

        public void DepositAmount(int customerId, decimal amount)
        {
            using (var bankDbContext = new BankDbContext())
            {
                var checkingAccountRepository = new CheckingAccountRepository(bankDbContext);
                CheckingAccount account = (checkingAccountRepository.GetAccount(customerId))
                    .FirstOrDefault();

                if (account == null)
                {
                    account = new CheckingAccount() { CustomerId = customerId };
                    checkingAccountRepository.Add(account);
                }

                account.Balance = account.Balance + amount;
                if (account.Balance < 0)
                    throw new ApplicationException("Overdraft not accepted");

                bankDbContext.SaveChanges();
            }
        }

        public void WithdrawAmount(int customerId, decimal amount)
        {
            DepositAmount(customerId, -1 * amount);
        }
    }
}

BankServiceHost
The host acts as the glue binding contracts to their services and exposing the endpoints. The services can be exposed either through code or through a configuration file; the configuration file is preferred because it allows run time changes to service behavior even after deployment. We have three services, and for each service you need to define the name (the class that implements the service, with a fully qualified namespace) and an endpoint, known as ABC: address, binding and contract.
We are using netTcpBinding and have defined a base address for each of the contracts.

<system.serviceModel>
  <services>
    <service name="ProfileService.Profile">
      <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Profile"/>
        </baseAddresses>
      </host>
    </service>
    <service name="CheckingAccountService.Checking">
      <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Checking"/>
        </baseAddresses>
      </host>
    </service>
    <service name="SavingsAccountService.Savings">
      <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Savings"/>
        </baseAddresses>
      </host>
    </service>
  </services>
</system.serviceModel>

We then have to open the services by creating service hosts, which will handle the incoming requests from clients.

using System;

namespace ServiceHost
{
    class Program
    {
        static void Main(string[] args)
        {
            CreateHosts();
            Console.ReadLine();
        }

        private static void CreateHosts()
        {
            CreateHost(typeof(ProfileService.Profile), "Profile Service");
            CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
            CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
        }

        private static void CreateHost(Type type, string hostDescription)
        {
            System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
            host.Open();

            if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                && host.ChannelDispatchers[0].Listener != null)
                Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
            else
                Console.WriteLine("Failed to start:" + hostDescription);
        }
    }
}

BankClient
The client knows nothing about the service business logic other than the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and service proxy can be generated by right clicking the project references and choosing 'Add Service Reference', then entering the service endpoint address. Alternatively, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint, provided the server and client are both built on the .NET Framework. One of the pros of the manual approach is that you don't have to work with messy generated code files.
<system.serviceModel>
  <client>
    <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile"
              binding="netTcpBinding" contract="ProfileContract.IProfile"/>
    <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking"
              binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
    <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings"
              binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
  </client>
</system.serviceModel>

The client uses a facade to connect to the services.

using System.ServiceModel;
using CheckingAccountContract;
using ProfileContract;
using SavingsAccountContract;

namespace Client
{
    public class ProxyFacade
    {
        public static IProfile ProfileProxy()
        {
            return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
        }

        public static ICheckingAccount CheckingAccountProxy()
        {
            return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                .CreateChannel();
        }

        public static ISavingsAccount SavingsAccountProxy()
        {
            return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                .CreateChannel();
        }
    }
}

With that in place, let's get our unit tests going.

using System;
using System.Diagnostics;
using BankDAL.Model;
using NUnit.Framework;
using ProfileContract;

namespace Client
{
    [TestFixture]
    public class Tests
    {
        private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
        }

        private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
        {
            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
        }

        [Test]
        public void CreateAndGetProfileTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            const string customerName = "Tom";
            int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
            Customer customer = profile.GetCustomer(customerId);
            Assert.AreEqual(customerName, customer.FullName);
        }

        [Test]
        public void DepositWithDrawAndTransferAmountTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
            var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

            // Deposit to Savings
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
            Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

            // Deposit to Checking
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
            Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Savings to Checking
            TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
            Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Transfer from Checking to Savings
            TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
            Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
        }

        [Test]
        public void FundTransfersWithOverDraftTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

            var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
            TransferFundsFromSavingsToCheckingAccount(customerId, 80);
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

            try
            {
                TransferFundsFromSavingsToCheckingAccount(customerId, 30);
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.Message);
            }

            Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        }
    }
}

We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with creating a customer and depositing, withdrawing and transferring amounts between accounts. The last case, FundTransfersWithOverDraftTest(), is the interesting one. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking, which results in an overdraft exception on savings while the $30 has already been deposited into checking. Because the two requests are not running inside a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look at transaction handling.

Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
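On that last point, EF code first looks for a connection string whose name matches the context class (BankDbContext), so the ServiceHost config file would carry an entry along these lines; the server and database names here are assumptions for illustration, not values from the original post.

<connectionStrings>
  <!-- Assumed local SQL Express instance and database name; adjust to your environment -->
  <add name="BankDbContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Bank;Integrated Security=True"
       providerName="System.Data.SqlClient"/>
</connectionStrings>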


  • Why is my router not routing?

    - by dwj
    Starting a week and a half ago my router stopped working with my cable modem. I went to sleep with it working and woke up with it not. I swapped in another router and am still having issues; I was gone for 10 days so now I'm back to trying to figure it out. While I was gone I left everything (cable modem, router, and computer) powered off.

    My setup:
    - Comcast Ambit cable modem (from Comcast)
    - Netgear WGR614 v4 router -- replaced with Linksys WRT54GS v1.1
    - Windows XP SP3
    - other computers, all currently unplugged

    The modem is using the firmware (ver 2.105.2001) provided by Comcast; hardware version 1.3. The Linksys router is using FW ver 4.71.4 (latest for this release of HW), factory defaults. I am only using the wired connections; no wireless. I have swapped out all of the cat5 cable.

    If I plug my computer directly into the cable modem, I can ping by name or number. Everything works perfectly. If I plug my computer into the router and the router into the modem, I cannot access anything outside of my local network. This is the exact setup I've used for the past 5 years; there were no changes in the past year.

    Now here's the interesting part: I can log into the Linksys router and get status information from it; everything appears good. Using the Diagnostics, I can run ping and traceroute to any site on the internet. These work perfectly. From my computer, I can ping the router and the modem. However, I cannot ping anything on the internet by name or by number. If I plug in another computer, I can ping it successfully. I've included two transcripts below that show these two attempts. Addresses, DNS, gateways, etc. look good. I cannot access the internet through either router. I am at a loss here. Suggestions? Help!

    Computer to Router to Cable Modem

    C:\ipconfig /renew
    Windows IP Configuration
    No operation can be performed on Bluetooth Network while it has its media disconnected.
    Ethernet adapter Local Area Connection:
       Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
       IP Address. . . . . . . . . . . . : 192.168.1.100
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.1.1
    Ethernet adapter Bluetooth Network:
       Media State . . . . . . . . . . . : Media disconnected

    C:\ipconfig /all
    Windows IP Configuration
       Host Name . . . . . . . . . . . . : wynton
       Primary Dns Suffix . . . . . . . :
       Node Type . . . . . . . . . . . . : Unknown
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No
       DNS Suffix Search List. . . . . . : hsd1.ca.comcast.net.
    Ethernet adapter Local Area Connection:
       Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
       Description . . . . . . . . . . . : Intel(R) 82562V-2 10/100 Network Connection
       Physical Address. . . . . . . . . : 00-1D-09-9B-45-EB
       Dhcp Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
       IP Address. . . . . . . . . . . . : 192.168.1.100
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.1.1
       DHCP Server . . . . . . . . . . . : 192.168.1.1
       DNS Servers . . . . . . . . . . . : 68.87.76.178 68.87.78.130
       Lease Obtained. . . . . . . . . . : Monday, March 22, 2010 10:21:55 PM
       Lease Expires . . . . . . . . . . : Tuesday, March 23, 2010 10:21:55 PM
    Ethernet adapter Bluetooth Network:
       Media State . . . . . . . . . . . : Media disconnected
       Description . . . . . . . . . . . : Bluetooth LAN Access Server Driver
       Physical Address. . . . . . . . . : 00-0A-3A-6F-68-41

    C:\ping google.com
    Ping request could not find host google.com. Please check the name and try again.

    C:\ping 74.125.19.104
    Pinging 74.125.19.104 with 32 bytes of data:
    Request timed out.
    Request timed out.
    Request timed out.
    Request timed out.
    Ping statistics for 74.125.19.104:
        Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
    C:\

    Computer to Cable Modem Directly

    C:\ipconfig /renew
    Windows IP Configuration
    No operation can be performed on Bluetooth Network while it has its media disconnected.
    Ethernet adapter Local Area Connection:
       Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
       IP Address. . . . . . . . . . . . : 71.204.149.195
       Subnet Mask . . . . . . . . . . . : 255.255.252.0
       Default Gateway . . . . . . . . . : 71.204.148.1
    Ethernet adapter Bluetooth Network:
       Media State . . . . . . . . . . . : Media disconnected

    C:\ipconfig /all
    Windows IP Configuration
       Host Name . . . . . . . . . . . . : wynton
       Primary Dns Suffix . . . . . . . :
       Node Type . . . . . . . . . . . . : Unknown
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No
       DNS Suffix Search List. . . . . . : hsd1.ca.comcast.net.
    Ethernet adapter Local Area Connection:
       Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
       Description . . . . . . . . . . . : Intel(R) 82562V-2 10/100 Network Connection
       Physical Address. . . . . . . . . : 00-1D-09-9B-45-EB
       Dhcp Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
       IP Address. . . . . . . . . . . . : 71.204.149.195
       Subnet Mask . . . . . . . . . . . : 255.255.252.0
       Default Gateway . . . . . . . . . : 71.204.148.1
       DHCP Server . . . . . . . . . . . : 68.87.76.10
       DNS Servers . . . . . . . . . . . : 68.87.76.178 68.87.78.130
       Lease Obtained. . . . . . . . . . : Monday, March 22, 2010 10:18:50 PM
       Lease Expires . . . . . . . . . . : Monday, March 22, 2010 11:12:31 PM
    Ethernet adapter Bluetooth Network:
       Media State . . . . . . . . . . . : Media disconnected
       Description . . . . . . . . . . . : Bluetooth LAN Access Server Driver
       Physical Address. . . . . . . . . : 00-0A-3A-6F-68-41

    C:\ping google.com
    Pinging google.com [74.125.19.99] with 32 bytes of data:
    Reply from 74.125.19.99: bytes=32 time=20ms TTL=55
    Reply from 74.125.19.99: bytes=32 time=17ms TTL=55
    Reply from 74.125.19.99: bytes=32 time=28ms TTL=55
    Reply from 74.125.19.99: bytes=32 time=18ms TTL=55
    Ping statistics for 74.125.19.99:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 17ms, Maximum = 28ms, Average = 20ms

    C:\ping 74.125.19.104
    Pinging 74.125.19.104 with 32 bytes of data:
    Reply from 74.125.19.104: bytes=32 time=18ms TTL=55
    Reply from 74.125.19.104: bytes=32 time=18ms TTL=55
    Reply from 74.125.19.104: bytes=32 time=17ms TTL=55
    Reply from 74.125.19.104: bytes=32 time=16ms TTL=55
    Ping statistics for 74.125.19.104:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 16ms, Maximum = 18ms, Average = 17ms
    C:\


  • Ping "replies" from same computer with 'Destination host unreachable' (no route to other computer)

    - by Srekel
    I've got two computers in a LAN behind a wireless router. One has XP with IP 192.168.1.2. This one has W7 with IP 192.168.1.7. If I try to ping the other one from this computer, I get this:

    C:\Users\Srekel>ping 192.168.1.2
    Pinging 192.168.1.2 with 32 bytes of data:
    Reply from 192.168.1.7: Destination host unreachable.
    Reply from 192.168.1.7: Destination host unreachable.
    Reply from 192.168.1.7: Destination host unreachable.
    Reply from 192.168.1.7: Destination host unreachable.
    Ping statistics for 192.168.1.2:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    Tracert gives the same result:

    C:\Users\Srekel>tracert 192.168.1.2
    Tracing route to 192.168.1.2 over a maximum of 30 hops
      1  Kakburken4 [192.168.1.7]  reports: Destination host unreachable.
    Trace complete.

    Although I can ping and tracert the router without any problems. I have disabled the firewalls on both computers. The router is set to use DHCP (if that matters). Here is the output from "route".

    C:\Users\Srekel>route print
    ===========================================================================
    Interface List
    13...00 25 86 df c6 89 ......TP-LINK Wireless N Adapter
    12...e0 cb 4e 26 b9 84 ......Realtek PCIe GBE Family Controller #2
    11...e0 cb 4e 26 be 94 ......Realtek PCIe GBE Family Controller
     1...........................Software Loopback Interface 1
    16...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
    14...00 00 00 00 00 00 00 e0  Teredo Tunneling Pseudo-Interface
    ===========================================================================
    IPv4 Route Table
    ===========================================================================
    Active Routes:
    Network Destination        Netmask          Gateway       Interface  Metric
              0.0.0.0          0.0.0.0      192.168.1.1      192.168.1.7     20
            127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
            127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
      127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
          192.168.1.0    255.255.255.0         On-link       192.168.1.7    276
          192.168.1.7  255.255.255.255         On-link       192.168.1.7    276
        192.168.1.255  255.255.255.255         On-link       192.168.1.7    276
            224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
            224.0.0.0        240.0.0.0         On-link       192.168.1.7    276
      255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
      255.255.255.255  255.255.255.255         On-link       192.168.1.7    276
    ===========================================================================
    Persistent Routes:
      None
    IPv6 Route Table
    ===========================================================================
    Active Routes:
     If Metric Network Destination      Gateway
     14     58 ::/0                     On-link
      1    306 ::1/128                  On-link
     14     58 2001::/32                On-link
     14    306 2001:0:5ef5:73ba:881:20c1:3f57:fef8/128   On-link
     14    306 fe80::/64                On-link
     14    306 fe80::881:20c1:3f57:fef8/128   On-link
      1    306 ff00::/8                 On-link
     14    306 ff00::/8                 On-link
    ===========================================================================
    Persistent Routes:
      None

    I've set up and debugged a few networks in my life but I'm not really an advanced network user, so I'm not sure what might be wrong. Any ideas? Oh, and pinging this computer from the other computer doesn't work either.

    EDIT: Adding arp output:

    C:\Users\Srekel>arp -a
    Interface: 192.168.1.7 --- 0xd
      Internet Address      Physical Address      Type
      192.168.1.1           00-1f-33-ef-28-01     dynamic
      192.168.1.255         ff-ff-ff-ff-ff-ff     static
      224.0.0.22            01-00-5e-00-00-16     static
      224.0.0.252           01-00-5e-00-00-fc     static
      239.255.255.250       01-00-5e-7f-ff-fa     static
      255.255.255.255       ff-ff-ff-ff-ff-ff     static

    Adding ipconfig...

    C:\Users\Srekel>ipconfig /all
    Windows IP Configuration
       Host Name . . . . . . . . . . . . : Kakburken4
       Primary Dns Suffix . . . . . . . :
       Node Type . . . . . . . . . . . . : Hybrid
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No
    Wireless LAN adapter Wireless Network Connection:
       Connection-specific DNS Suffix . :
       Description . . . . . . . . . . . : TP-LINK Wireless N Adapter
       Physical Address. . . . . . . . . : 00-25-86-DF-C6-89
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
       IPv4 Address. . . . . . . . . . . : 192.168.1.7(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Lease Obtained. . . . . . . . . . : 09 April 2010 23:09:45
       Lease Expires . . . . . . . . . . : 10 April 2010 23:09:45
       Default Gateway . . . . . . . . . : 192.168.1.1
       DHCP Server . . . . . . . . . . . : 192.168.1.1
       DNS Servers . . . . . . . . . . . : 192.168.1.1
       NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Local Area Connection 2:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix . :
       Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller #2
       Physical Address. . . . . . . . . : E0-CB-4E-26-B9-84
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
    Ethernet adapter Local Area Connection:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix . :
       Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
       Physical Address. . . . . . . . . : E0-CB-4E-26-BE-94
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
    Tunnel adapter isatap.{74D5C406-894E-4000-8DE7-6AAEBF7C8382}:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix . :
       Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2
       Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
    Tunnel adapter Teredo Tunneling Pseudo-Interface:
       Connection-specific DNS Suffix . :
       Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
       Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv6 Address. . . . . . . . . . . : 2001:0:5ef5:73ba:881:20c1:3f57:fef8(Preferred)
       Link-local IPv6 Address . . . . . : fe80::881:20c1:3f57:fef8%14(Preferred)
       Default Gateway . . . . . . . . . : ::
       NetBIOS over Tcpip. . . . . . . . : Disabled

