Search Results

Search found 23845 results on 954 pages for 'instance methods'.

  • How to get instance of Boxy from within its contents?

    - by Chaaosh
    Hi. How can I access the Boxy instance from within its contents? I have a link that loads a form into Boxy; the popup displays and works fine. Now I want to access the Boxy instance from the form's submit function so I can close it there. Here is the code:

// the boxy link loads the form
$('a.boxy').boxy({
    modal: true,
    show: true,
    title: '&nbsp;',
    closeable: true,
    center: true
});

<form name="subscribe">
    name <input type="text" name="name" value="" /><br />
    email <input type="text" name="email" value="" /><br />
    <input type="submit" name="submit" value="subscribe" onclick="submitform()" />
</form>
<script>
function submitform() {
    // here I want to access the boxy instance
    // and then close it
}
</script>

    Kindly suggest.

  • New to threading in C#, can you make thread methods generic and what are the dangers?

    - by ibarczewski
    Hey all, I'm just now starting to get into threading, and I'd like to know if I can make this more abstract. Both Foo and Bar inherit their methods from a base class, so I'd like to pass in one or the other and do the work through a method inherited from that base. I'd also like to know how to properly name threads and the methods that run on them.

if (ChkFoo.Checked)
{
    Thread fooThread = new Thread(new ThreadStart(this.ThreadedFooMethod));
    fooThread.Start();
}
if (ChkBar.Checked)
{
    Thread barThread = new Thread(new ThreadStart(this.ThreadedBarMethod));
    barThread.Start();
}

// ...

public void ThreadedFooMethod()
{
    Foo newFoo = new Foo();
    // Do work on newFoo
}

public void ThreadedBarMethod()
{
    Bar newBar = new Bar();
    // Do similar work
}

    Thanks all!
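    One way to generalize this is a single generic helper constrained to the common base class. This is a minimal sketch, not the poster's actual code; the base class name (WorkItemBase) and its DoWork() method are invented for the example:

using System;
using System.Threading;

public abstract class WorkItemBase   // hypothetical common base for Foo and Bar
{
    public abstract void DoWork();
}

public class Foo : WorkItemBase
{
    public override void DoWork() { /* Foo-specific work */ }
}

public class Bar : WorkItemBase
{
    public override void DoWork() { /* similar work for Bar */ }
}

public static class WorkRunner
{
    // Starts DoWork() on a new thread. The instance is created inside the
    // thread delegate, so no shared state crosses the thread boundary.
    public static Thread Start<T>() where T : WorkItemBase, new()
    {
        var thread = new Thread(() => new T().DoWork());
        thread.Name = typeof(T).Name + "Worker";   // e.g. "FooWorker"
        thread.Start();
        return thread;
    }
}

    The call sites then collapse to WorkRunner.Start<Foo>() and WorkRunner.Start<Bar>(). As for the dangers: the classic one is shared mutable state; constructing the instance inside the thread avoids that here, but anything shared that the thread touches (UI controls in particular) still needs synchronization or Control.Invoke.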

  • Is there any difference in the implementation of these three validation methods?

    - by dontWatchMyProfile
    Core Data calls these methods in certain situations:

- (BOOL)validateForInsert:(NSError **)outError;
- (BOOL)validateForUpdate:(NSError **)outError;
- (BOOL)validateForDelete:(NSError **)outError;

    I wonder if they do anything different, or if they're essentially doing the exact same thing. As far as I know, these methods call -validateValue:forKey:error: once for every property. The only difference I can imagine is in -validateForDelete:. I see no reason to validate an object when it is about to be deleted, except for applying delete rules - and probably only in the case of the DENY rule.

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,

    Unfortunately, a standby instance sometimes needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. That is why we wanted a single script that recreates standby instances in an easy way.

    The script recreates the standby given some prerequisites:

    - The database version should be at least 11gR1
    - A dummy instance is started on the standby node (we are seeking to improve this so it won't be needed)
    - The Broker configuration hasn't been removed
    - In our case we have two TNSNAMES files, one for the standby creation (using the SID) and another for production use with service names (including the Broker service name)
    - Some environment variables are set up by the environment db script (like ORACLE_HOME, PATH, ...)
    - The directory tree has not been modified on the standby host

    We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!

#!/bin/ksh
###    NAME / VERSION
###       recrea_dg.sh   v.1.00
###
###    DESCRIPTION
###       Recreates the standby database
###
###    RETURNS
###       0 STANDBY created correctly
###       1 Failure
###
###    NOTES
###       This shell script MUST NOT be modified.
###       All required variables and constants are taken from the environment.
###
###    MODIFIED BY:       DATE:         COMMENTS:
###    ---------------    ----------    -------------------------------------
###      Oracle           15/02/2011    Created.
###

###
### Load the environment
###
V_ADMIN_DIR=`dirname $0`
. ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null
if [ $? -ne 0 ]
then
  echo "Error loading the environment."
  exit 1
fi

V_RET=0
V_DATE=`/bin/date`
V_DATE_F=`/bin/date +%Y%m%d_%H%M%S`
V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log
exec 4>&1
tee ${V_LOGFILE} >&4 |&    # log file variable fixed (was ${V_FICH_LOG}, which is never set)
exec 1>&p 2>&1

###
### Variables to recreate the Data Guard
###
V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'`
if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ]
then
    V_LOCAL_BR=${V_DB_BR}'01'
    V_REMOTE_BR=${V_DB_BR}'02'
else
    V_LOCAL_BR=${V_DB_BR}'02'
    V_REMOTE_BR=${V_DB_BR}'01'
fi

echo " Getting local instance ROLE ${ORACLE_SID} ..."
sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
whenever sqlerror exit 1
connect / as sysdba
variable salida number
declare
  v_database_role v\$database.database_role%type;
begin
  select database_role into v_database_role from v\$database;
  :salida := case v_database_role
       when 'PRIMARY' then 2
       when 'PHYSICAL STANDBY' then 3
       else 4
     end;
end;
/
exit :salida
!
case $? in
1) echo " ERROR: Cannot get instance ROLE." | tee -a ${V_LOGFILE} 2>&1
   V_RET=1 ;;
2) echo " Local instance has the PRIMARY role." | tee -a ${V_LOGFILE} 2>&1
   V_DB_ROLE_LCL=PRIMARY ;;
3) echo " Local instance has the PHYSICAL STANDBY role." | tee -a ${V_LOGFILE} 2>&1
   V_DB_ROLE_LCL=STANDBY ;;
*) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE} 2>&1
   V_RET=1 ;;
esac

if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ]
then
    echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Recreating STANDBY instance." | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    V_PRIMARY=${V_LOCAL_BR}
    V_STANDBY=${V_REMOTE_BR}
fi

if [ "${V_DB_ROLE_LCL}" = "STANDBY" ]
then
    echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Recreating STANDBY instance." | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    V_PRIMARY=${V_REMOTE_BR}
    V_STANDBY=${V_LOCAL_BR}
fi

# Load the host variables
PRY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba
select 'KEEP',host_name from v\\$instance;
EOF`
SBY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
select 'KEEP',host_name from v\\$instance;
EOF`
echo "The primary HOST is: ${PRY_HOST}" | tee -a ${V_LOGFILE} 2>&1
echo "The standby HOST is: ${SBY_HOST}" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1

##
## Stop the STANDBY instance
##
V_DATE=`/bin/date`
echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
SBY_STATUS=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
select 'KEEP',status from v\\$instance;
EOF`
if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ]
then
    echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
    whenever sqlerror exit 1
    connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
    shutdown abort
!
fi
V_DATE=`/bin/date`
echo ""
echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1

##
## Remove the database files
##
V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'`
V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'`
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo

##
## Start the dummy standby instance in nomount
##
V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Starting DUMMY Standby Instance " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${SBY_HOST} touch /home/oracle/init_dg.ora
ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora'
ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh
ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora'

##
## TNSNAMES change, specific for the RMAN duplicate
##
V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Setting up TNSNAMES on the PRIMARY host " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
rman <<-! >>${V_LOGFILE}
connect target sys/${V_DB_PWD}@${V_PRIMARY}
connect auxiliary sys/${V_DB_PWD}@${V_STANDBY}
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
dorecover
spfile
parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}'
set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl'
set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
set 'db_unique_name'='${V_SBY_SID}'
set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})'
set fal_client='${V_STANDBY}'
set fal_server='${V_PRIMARY}'
set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
nofilenamecheck
;
}
!
V_RC=$?    # capture the RMAN exit code before it is overwritten
V_DATE=`/bin/date`
if [ ${V_RC} -ne 0 ]
then
    echo ""
    echo "${V_DATE} - Error creating the STANDBY instance"
    echo ""
    echo "********************************************************************************"
else
    echo ""
    echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "
    echo ""
    echo "********************************************************************************"
fi

sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
whenever sqlerror exit 1
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;
alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;
alter database recover managed standby database using current logfile disconnect from session;
alter system set dg_broker_start=true scope=both;
!

##
## TNSNAMES change, back to production mode
##
V_DATE=`/bin/date`
echo " " | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Restoring TNSNAMES on the PRIMARY " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Waiting for media recovery before checking the DATA GUARD Broker" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
sleep 200
dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1
connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}
show configuration verbose;
!
if [ $? -ne 0 ] ; then
    echo "       ERROR: The Broker status is not SUCCESS" | tee -a ${V_LOGFILE} 2>&1
    V_RET=1
else
    echo "      DATA GUARD OK " | tee -a ${V_LOGFILE} 2>&1
    V_RET=0
fi

    Hope it helps.

  • Can't install graphic drivers in 12.04

    - by yinon
    The driver is ATI/AMD proprietary FGLRX graphics driver. After clicking Activate, it asks for my password and starts downloading. Then it shows an error message: 2012-10-03 16:16:04,227 DEBUG: updating <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> 2012-10-03 16:16:06,172 DEBUG: reading modalias file /lib/modules/3.2.0-29-generic-pae/modules.alias 2012-10-03 16:16:06,383 DEBUG: reading modalias file /usr/share/jockey/modaliases/b43 2012-10-03 16:16:06,386 DEBUG: reading modalias file /usr/share/jockey/modaliases/disable-upstream-nvidia 2012-10-03 16:16:06,456 DEBUG: loading custom handler /usr/share/jockey/handlers/pvr-omap4.py 2012-10-03 16:16:06,506 WARNING: modinfo for module omapdrm_pvr failed: ERROR: modinfo: could not find module omapdrm_pvr 2012-10-03 16:16:06,509 DEBUG: Instantiated Handler subclass __builtin__.PVROmap4Driver from name PVROmap4Driver 2012-10-03 16:16:06,682 DEBUG: PowerVR SGX proprietary graphics driver for OMAP 4 not available 2012-10-03 16:16:06,682 DEBUG: loading custom handler /usr/share/jockey/handlers/cdv.py 2012-10-03 16:16:06,727 WARNING: modinfo for module cedarview_gfx failed: ERROR: modinfo: could not find module cedarview_gfx 2012-10-03 16:16:06,728 DEBUG: Instantiated Handler subclass __builtin__.CdvDriver from name CdvDriver 2012-10-03 16:16:06,728 DEBUG: cdv.available: falling back to default 2012-10-03 16:16:06,772 DEBUG: Intel Cedarview graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,772 DEBUG: loading custom handler /usr/share/jockey/handlers/vmware-client.py 2012-10-03 16:16:06,781 WARNING: modinfo for module vmxnet failed: ERROR: modinfo: could not find module vmxnet 2012-10-03 16:16:06,781 DEBUG: Instantiated Handler subclass __builtin__.VmwareClientHandler from name VmwareClientHandler 2012-10-03 16:16:06,795 DEBUG: VMWare Client Tools availability undetermined, adding to pool 2012-10-03 16:16:06,796 DEBUG: loading custom handler /usr/share/jockey/handlers/fglrx.py 2012-10-03 16:16:06,801 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates 2012-10-03 16:16:06,805 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriverUpdate from name FglrxDriverUpdate 2012-10-03 16:16:06,805 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,833 DEBUG: ATI/AMD proprietary FGLRX graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:06,836 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx 2012-10-03 16:16:06,840 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriver from name FglrxDriver 2012-10-03 16:16:06,840 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,873 DEBUG: ATI/AMD proprietary FGLRX graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,873 DEBUG: loading custom handler /usr/share/jockey/handlers/dvb_usb_firmware.py 2012-10-03 16:16:06,925 DEBUG: Instantiated Handler subclass __builtin__.DvbUsbFirmwareHandler from name DvbUsbFirmwareHandler 2012-10-03 16:16:06,926 DEBUG: Firmware for DVB cards not available 2012-10-03 16:16:06,926 DEBUG: loading custom handler /usr/share/jockey/handlers/nvidia.py 2012-10-03 16:16:06,961 WARNING: modinfo for module nvidia_96 failed: ERROR: modinfo: could not find module nvidia_96 2012-10-03 16:16:06,967 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96 from name NvidiaDriver96 2012-10-03 16:16:06,968 DEBUG: nvidia.available: falling back to default 
2012-10-03 16:16:06,980 DEBUG: XorgDriverHandler(nvidia_96, nvidia-96, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11 2012-10-03 16:16:06,980 DEBUG: NVIDIA accelerated graphics driver not available 2012-10-03 16:16:06,983 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current 2012-10-03 16:16:06,987 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrent from name NvidiaDriverCurrent 2012-10-03 16:16:06,987 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,015 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool 2012-10-03 16:16:07,018 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates 2012-10-03 16:16:07,021 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrentUpdates from name NvidiaDriverCurrentUpdates 2012-10-03 16:16:07,022 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,066 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:07,069 WARNING: modinfo for module nvidia_173_updates failed: ERROR: modinfo: could not find module nvidia_173_updates 2012-10-03 16:16:07,072 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173Updates from name NvidiaDriver173Updates 2012-10-03 16:16:07,073 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,105 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:07,112 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173 2012-10-03 16:16:07,118 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173 from name NvidiaDriver173 2012-10-03 16:16:07,119 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,159 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool 2012-10-03 16:16:07,166 WARNING: modinfo for module nvidia_96_updates failed: ERROR: modinfo: could not find module nvidia_96_updates 2012-10-03 16:16:07,171 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96Updates from name NvidiaDriver96Updates 2012-10-03 16:16:07,171 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,188 DEBUG: XorgDriverHandler(nvidia_96_updates, nvidia-96-updates, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11 2012-10-03 16:16:07,188 DEBUG: NVIDIA accelerated graphics driver (post-release updates) not available 2012-10-03 16:16:07,188 DEBUG: loading custom handler /usr/share/jockey/handlers/madwifi.py 2012-10-03 16:16:07,195 WARNING: modinfo for module ath_pci failed: ERROR: modinfo: could not find module ath_pci 2012-10-03 16:16:07,195 DEBUG: Instantiated Handler subclass __builtin__.MadwifiHandler from name MadwifiHandler 2012-10-03 16:16:07,196 DEBUG: Alternate Atheros "madwifi" driver availability undetermined, adding to pool 2012-10-03 16:16:07,196 DEBUG: loading custom handler /usr/share/jockey/handlers/sl_modem.py 2012-10-03 16:16:07,213 DEBUG: Instantiated Handler subclass __builtin__.SlModem from name SlModem 2012-10-03 16:16:07,234 DEBUG: Software modem not available 2012-10-03 16:16:07,234 DEBUG: loading custom handler /usr/share/jockey/handlers/broadcom_wl.py 2012-10-03 16:16:07,239 WARNING: modinfo for module wl failed: ERROR: modinfo: could not 
find module wl 2012-10-03 16:16:07,277 DEBUG: Instantiated Handler subclass __builtin__.BroadcomWLHandler from name BroadcomWLHandler 2012-10-03 16:16:07,277 DEBUG: Broadcom STA wireless driver availability undetermined, adding to pool 2012-10-03 16:16:07,278 DEBUG: all custom handlers loaded 2012-10-03 16:16:07,278 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027D8sv00001043sd000082EAbc04sc03i00') 2012-10-03 16:16:07,568 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'} 2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,699 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'} 2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,699 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0000v0000p0000e0000-e0,5,kramlsfw6,') 2012-10-03 16:16:07,704 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,704 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,704 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027DAsv00001043sd00008179bc0Csc05i00') 2012-10-03 16:16:07,707 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801'} 2012-10-03 16:16:07,707 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C01:') 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0B00:') 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001969d00001026sv00001043sd00008304bc02sc00i00') 2012-10-03 16:16:07,710 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'atl1e'} 2012-10-03 16:16:07,710 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'atl1e', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,710 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0003v04F2p0816e0111-e0,1,4,11,14,k71,72,73,74,75,77,79,7A,7B,7C,7D,7E,7F,80,81,82,83,84,85,86,87,88,89,8A,8C,8E,96,98,9E,9F,A1,A3,A4,A5,A6,AD,B0,B1,B2,B3,B4,B7,B8,B9,BA,BB,BC,BD,BE,BF,C0,C1,C2,F0,ram4,l0,1,2,sfw') 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 
'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,711 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:pcspkr') 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,712 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp'} 2012-10-03 16:16:07,712 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,712 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v1D6Bp0001d0302dc09dsc00dp00ic09isc00ip00') 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0019v0000p0001e0000-e0,1,k74,ramlsfw') 2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'} 2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C04:') 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:eisa') 2012-10-03 16:16:07,725 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027CCsv00001043sd00008179bc0Csc03i20') 2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:Fixed MDIO bus') 2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000029C0sv00001043sd000082B0bc06sc00i00') 2012-10-03 16:16:07,731 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v045Ep0766d0101dcEFdsc02dp01ic01isc01ip00') 2012-10-03 16:16:07,777 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio'} 2012-10-03 16:16:07,777 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,777 DEBUG: querying driver db 
<jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0F03:PNP0F13:') 2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0000:') 2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001002d000095C5sv0000174Bsd0000E400bc03sc00i00') 2012-10-03 16:16:08,072 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx_updates', 'package': 'fglrx-updates'} 2012-10-03 16:16:08,133 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,134 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:16:08,072 DEBUG: found match in handler pool xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates)) 2012-10-03 16:16:08,136 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates 2012-10-03 16:16:08,147 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:08,173 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,173 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:16:08,162 DEBUG: got handler xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates)) 2012-10-03 16:16:08,173 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx', 'package': 'fglrx'} 2012-10-03 16:16:08,184 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,184 DEBUG: fglrx is not the alternative in use 2012-10-03 16:16:08,173 DEBUG: found match in handler pool xorg:fglrx([FglrxDriver, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver) 2012-10-03 16:16:08,187 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx 2012-10-03 16:16:08,190 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:08,216 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None . . . 2012-10-03 16:18:10,552 DEBUG: install progress initramfs-tools 62.500000 2012-10-03 16:18:22,249 DEBUG: install progress libc-bin 62.500000 2012-10-03 16:18:23,251 DEBUG: Selecting previously unselected package dkms. (Reading database ... 142496 files and directories currently installed.) Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ... Selecting previously unselected package fakeroot. Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ... Selecting previously unselected package fglrx-updates. Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Selecting previously unselected package fglrx-amdcccle-updates. Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... 
ureadahead will be reprofiled on next reboot dpkg: error processing libxss1 (--configure): package libxss1 is already installed and configured dpkg: error processing chromium-codecs-ffmpeg (--configure): package chromium-codecs-ffmpeg is already installed and configured dpkg: error processing chromium-browser (--configure): package chromium-browser is already installed and configured dpkg: error processing chromium-browser-l10n (--configure): package chromium-browser-l10n is already installed and configured Setting up dkms (2.2.0.3-1ubuntu3) ... No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Setting up fakeroot (1.18.2-1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up fglrx-updates (2:8.960-0ubuntu1.1) ... update-alternatives: using /usr/lib/fglrx/ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in auto mode. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: using /usr/lib/fglrx/alt_ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in auto mode. update-initramfs: deferring update (trigger activated) Loading new fglrx-updates-8.960 DKMS files... First Installation: checking all kernels... Building only for 3.2.0-29-generic-pae Building for architecture i686 Building initial module for 3.2.0-29-generic-pae Done. fglrx_updates: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-29-generic-pae/updates/dkms/ depmod...... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle-updates (2:8.960-0ubuntu1.1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-29-generic-pae Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: libxss1 chromium-codecs-ffmpeg chromium-browser chromium-browser-l10n Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) 2012-10-03 16:18:23,256 ERROR: Package failed to install: Selecting previously unselected package dkms. (Reading database ... 142496 files and directories currently installed.) Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ... Selecting previously unselected package fakeroot. Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ... Selecting previously unselected package fglrx-updates. Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Selecting previously unselected package fglrx-amdcccle-updates. Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Processing triggers for man-db ... 
Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot dpkg: error processing libxss1 (--configure): package libxss1 is already installed and configured dpkg: error processing chromium-codecs-ffmpeg (--configure): package chromium-codecs-ffmpeg is already installed and configured dpkg: error processing chromium-browser (--configure): package chromium-browser is already installed and configured dpkg: error processing chromium-browser-l10n (--configure): package chromium-browser-l10n is already installed and configured Setting up dkms (2.2.0.3-1ubuntu3) ... No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Setting up fakeroot (1.18.2-1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up fglrx-updates (2:8.960-0ubuntu1.1) ... update-alternatives: using /usr/lib/fglrx/ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in auto mode. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: using /usr/lib/fglrx/alt_ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in auto mode. update-initramfs: deferring update (trigger activated) Loading new fglrx-updates-8.960 DKMS files... First Installation: checking all kernels... Building only for 3.2.0-29-generic-pae Building for architecture i686 Building initial module for 3.2.0-29-generic-pae Done. fglrx_updates: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-29-generic-pae/updates/dkms/ depmod...... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle-updates (2:8.960-0ubuntu1.1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-29-generic-pae Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Errors were encountered while processing: libxss1 chromium-codecs-ffmpeg chromium-browser chromium-browser-l10n Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) 2012-10-03 16:18:23,590 WARNING: /sys/module/fglrx_updates/drivers does not exist, cannot rebind fglrx_updates driver 2012-10-03 16:18:43,601 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:43,601 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:43,617 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:43,617 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,143 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,144 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,154 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,154 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,182 DEBUG: fglrx.enabled(fglrx): target_alt /usr/lib/fglrx/ld.so.conf current_alt /usr/lib/fglrx/ld.so.conf other target alt /usr/lib/fglrx/alt_ld.so.conf other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,182 DEBUG: XorgDriverHandler(%s, %s).enabled(): No X.org driver set, not checking 2012-10-03 16:18:54,215 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,215 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,229 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,229 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,268 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,268 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,279 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,279 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:18:54,298 DEBUG: fglrx.enabled(fglrx): target_alt /usr/lib/fglrx/ld.so.conf current_alt /usr/lib/fglrx/ld.so.conf other target alt /usr/lib/fglrx/alt_ld.so.conf other current alt /usr/lib/fglrx/alt_ld.so.conf 2012-10-03 16:18:54,298 DEBUG: XorgDriverHandler(%s, %s).enabled(): No X.org driver set, not checking 2012-10-03 16:18:57,828 DEBUG: Shutting down I don't know how to troubleshoot from looking at the log file, could somebody assist me with this please? You can download the log file at: https://www.dropbox.com/s/a59d2hyabo02q5z/jockey.log

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, in JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary one is to not force loading of JSON.NET unless you actually use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application must have a reference to JSON.NET or else you get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load the assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics and enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. Specifically, it doesn't deal with object activation, truly dynamic (string-based) member activation, or access to non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

/// <summary>
/// Creates an instance of a type based on a string. Assumes that the type's
/// </summary>
/// <param name="typeName">Common name of the type</param>
/// <param name="args">Any constructor parameters</param>
/// <returns></returns>
public static object CreateInstanceFromString(string typeName, params object[] args)
{
    object instance = null;
    Type type = null;
    try
    {
        type = GetTypeFromName(typeName);
        if (type == null)
            return null;
        instance = Activator.CreateInstance(type, args);
    }
    catch
    {
        return null;
    }
    return instance;
}

/// <summary>
/// Helper routine that looks up a type name and tries to retrieve the
/// full type reference in the actively executing assemblies.
/// </summary>
/// <param name="typeName"></param>
/// <returns></returns>
public static Type GetTypeFromName(string typeName)
{
    Type type = null;

    // Let default name binding find it
    type = Type.GetType(typeName, false);
    if (type != null)
        return type;

    // look through the assembly list
    var assemblies = AppDomain.CurrentDomain.GetAssemblies();

    // try to find it manually
    foreach (Assembly asm in assemblies)
    {
        type = asm.GetType(typeName, false);
        if (type != null)
            break;
    }
    return type;
}

    To use this for loading JSON.NET, I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading the assembly when creation fails, since a missing assembly is the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James, the JSON.NET serializer instance is thread safe and quite a bit faster when cached). Here's what the factory function looks like in JsonSerializationUtils:

/// <summary>
/// Dynamically creates an instance of JSON.NET
/// </summary>
/// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
/// <returns>Dynamic JsonSerializer instance</returns>
public static dynamic CreateJsonNet(bool throwExceptions = true)
{
    if (JsonNet != null)
        return JsonNet;

    lock (SyncLock)
    {
        if (JsonNet != null)
            return JsonNet;

        // Try to create an instance
        dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
        if (json == null)
        {
            try
            {
                var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
            }
            catch (Exception ex)
            {
                if (throwExceptions)
                    throw;
                return null;
            }
        }
        if (json == null)
            return null;

        json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

        // Enums as strings in JSON
        dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
        json.Converters.Add(enumConverter);

        JsonNet = json;
    }

    return JsonNet;
}

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and, when that fails, tries to load the assembly and then re-tries the load. Once the instance is loaded, some configuration occurs on it. Specifically, I set the ReferenceLoopHandling option so the serializer doesn't blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an enum value. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums, so I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.

    As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.
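    The GetStaticProperty() helper itself isn't shown in the article. Purely as a rough sketch of what such a helper could look like, based on the description above (resolve the type by name, then use Type.InvokeMember() to read the static member) - the actual ReflectionUtils implementation may differ:

// Sketch only; would live alongside GetTypeFromName() in a static helper class.
public static object GetStaticProperty(string typeName, string property)
{
    // Resolve the type the same way GetTypeFromName() does above.
    Type type = GetTypeFromName(typeName);
    if (type == null)
        return null;

    try
    {
        // GetField | GetProperty covers both static fields and properties;
        // enum members are static fields, so this also resolves enum values
        // like "Newtonsoft.Json.ReferenceLoopHandling" + "Ignore".
        return type.InvokeMember(property,
            BindingFlags.Static | BindingFlags.Public |
            BindingFlags.GetField | BindingFlags.GetProperty,
            null, type, null);
    }
    catch
    {
        return null;
    }
}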
    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the file serialization looks like:

/// <summary>
/// Serializes an object instance to a JSON file.
/// </summary>
/// <param name="value">the value to serialize</param>
/// <param name="fileName">Full path to the file to write out with JSON.</param>
/// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
/// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
/// <returns>true or false</returns>
public static bool SerializeToFile(object value, string fileName,
                                   bool throwExceptions = false, bool formatJsonOutput = false)
{
    dynamic writer = null;
    FileStream fs = null;
    try
    {
        Type type = value.GetType();

        var json = CreateJsonNet(throwExceptions);
        if (json == null)
            return false;

        fs = new FileStream(fileName, FileMode.Create);
        var sw = new StreamWriter(fs, Encoding.UTF8);

        writer = Activator.CreateInstance(JsonTextWriterType, sw);

        if (formatJsonOutput)
            writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

        writer.QuoteChar = '"';
        json.Serialize(writer, value);
    }
    catch (Exception ex)
    {
        Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
        if (throwExceptions)
            throw;
        return false;
    }
    finally
    {
        if (writer != null)
            writer.Close();
        if (fs != null)
            fs.Close();
    }
    return true;
}

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter, configure a couple of enum settings on it, and call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic, it's still fairly short and readable.

    For full-circle operation, here's the DeserializeFromFile() version:

/// <summary>
/// Deserializes an object from file and returns a reference.
/// </summary>
/// <param name="fileName">Name of the file to deserialize from</param>
/// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
/// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
/// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
{
    dynamic json = CreateJsonNet(throwExceptions);
    if (json == null)
        return null;

    object result = null;
    dynamic reader = null;
    FileStream fs = null;
    try
    {
        fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
        var sr = new StreamReader(fs, Encoding.UTF8);
        reader = Activator.CreateInstance(JsonTextReaderType, sr);
        result = json.Deserialize(reader, objectType);
        reader.Close();
    }
    catch (Exception ex)
    {
        Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
        if (throwExceptions)
            throw;
        return null;
    }
    finally
    {
        if (reader != null)
            reader.Close();
        if (fs != null)
            fs.Close();
    }
    return result;
}

    This code is a little more compact since there are no prettifying options to set. Here the JsonTextReader is created dynamically, and it feeds the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.

    These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this:

/// <summary>
/// Reads and writes configuration settings as JSON files. Allows reading
/// and writing to default or external files.
/// </summary>
public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
    where TAppConfiguration : AppConfiguration, new()
{
    /// <summary>
    /// Optional - the configuration file where configuration settings are
    /// stored. If not specified uses the default Configuration Manager
    /// and its default store.
    /// </summary>
    public string JsonConfigurationFile
    {
        get { return _JsonConfigurationFile; }
        set { _JsonConfigurationFile = value; }
    }
    private string _JsonConfigurationFile = string.Empty;

    public override bool Read(AppConfiguration config)
    {
        var newConfig = JsonSerializationUtils.DeserializeFromFile(
                            JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
        if (newConfig == null)
        {
            if (Write(config))
                return true;
            return false;
        }

        DecryptFields(newConfig);
        DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
        return true;
    }

    /// <summary>
    /// Reads configuration from the JSON file into a new instance.
    /// </summary>
    public override TAppConfig Read<TAppConfig>()
    {
        var result = JsonSerializationUtils.DeserializeFromFile(
                         JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
        if (result != null)
            DecryptFields(result);
        return result;
    }

    /// <summary>
    /// Write configuration to the JsonConfigurationFile location
    /// </summary>
    public override bool Write(AppConfiguration config)
    {
        EncryptFields(config);
        bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

        // Have to decrypt again to make sure the properties are readable afterwards
        DecryptFields(config);
        return result;
    }
}

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - implementing three methods will do in most cases. Note that this code doesn't have any dynamic dependencies; all of that is abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

    Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right... In some informal testing it looks like the native code is nearly twice as fast as the dynamic code, with most of the slowness attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly sluggish. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer handling AJAX requests on a web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For operations that are not pivotal to a component or application and are only used under certain circumstances, however, dynamic loading can be beneficial, avoiding extra files in the distribution and keeping load size down. These days, when new Visual Studio projects start with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful...

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET  C#

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in. Hard Link Issues in a Component Library Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take on a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library. Enter Dynamic Loading So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides: no assembly reference means only dynamic access - no compiler type checking or Intellisense - and the host application is required to have a reference to JSON.NET or else it gets runtime errors. The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work: dynamically create an instance and optionally attempt to load an assembly (if not loaded); load types into dynamic variables; and use Reflection for a few tasks like statics/enums. The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection. Dynamic Object Instantiation The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's assembly is already loaded. /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless. Doing the actual JSON Conversion Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON Serialization for 'blob' like document storage) this is also going to be handy. Performance? Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough to not matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency. Dynamic Loading - Worth it? Dynamic loading is not something you need very often, but on occasion it makes sense. There's a price to be paid in added code complexity and a performance hit, which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset… © Rick Strahl, West Wind Technologies, 2005-2013 Posted in .NET  C#
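To round this out, here's a minimal usage sketch of the two JsonSerializationUtils methods shown above; the Customer class and the file path are hypothetical stand-ins, not part of the original library.

// Hypothetical POCO used only for this sketch
public class Customer
{
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

// Serialize to an indented JSON file - JSON.NET is located, loaded
// and cached behind the scenes on the first call
var customer = new Customer { Name = "West Wind", Entered = DateTime.Now };
bool ok = JsonSerializationUtils.SerializeToFile(
    customer, @"c:\temp\customer.json",
    throwExceptions: false, formatJsonOutput: true);

// Read it back; the untyped result has to be cast to the target type
var loaded = JsonSerializationUtils.DeserializeFromFile(
    @"c:\temp\customer.json", typeof(Customer)) as Customer;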

    Read the article

  • A solution for a PHP website without a framework

    - by lortabac
One of our customers asked us to add some dynamic functionality to an existing website, made of several static HTML pages. We normally work with an MVC framework (mostly CodeIgniter), but in this case moving everything to a framework would require too much time. Since it is not a big project, not having the full functionality of a framework is not a problem. But the question is how to keep the code clean. The solution I came up with is to divide the code into libraries (the application's API) and models. So inside HTML there will only be API calls, and readability will not be sacrificed. I implemented this with a sort of static Registry (sorry if I'm wrong, I am not a design pattern expert): <?php class Custom_framework { //Global database instance private static $db; //Registered models private static $models = array(); //Registered libraries private static $libraries = array(); //Returns a database class instance static public function get_db(){ if(isset(self::$db)){ //If instance exists, returns it return self::$db; } else { //If instance doesn't exist, creates it self::$db = new DB; return self::$db; } } //Returns a model instance static public function get_model($model_name){ if(isset(self::$models[$model_name])){ //If instance exists, returns it return self::$models[$model_name]; } else { //If instance doesn't exist, creates it if(is_file(ROOT_DIR . 'application/models/' . $model_name . '.php')){ include_once ROOT_DIR . 'application/models/' . $model_name . '.php'; self::$models[$model_name] = new $model_name; return self::$models[$model_name]; } else { return FALSE; } } } //Returns a library instance static public function get_library($library_name){ if(isset(self::$libraries[$library_name])){ //If instance exists, returns it return self::$libraries[$library_name]; } else { //If instance doesn't exist, creates it if(is_file(ROOT_DIR . 'application/libraries/' . $library_name . '.php')){ include_once ROOT_DIR . 'application/libraries/' . $library_name . '.php'; self::$libraries[$library_name] = new $library_name; return self::$libraries[$library_name]; } else { return FALSE; } } } } Inside HTML, API methods are accessed like this: <?php echo Custom_framework::get_library('My_library')->my_method(); ?> It looks to me like a practical solution, but I wonder what its drawbacks are and what the possible alternatives would be.

    Read the article

  • NHibernate: How is identity Id updated when saving a transient instance?

    - by bretddog
    If I use session-per-transaction and call: session.SaveOrUpdate(entity) corrected: session.SaveOrUpdateCopy(entity) ..and entity is a transient instance with identity-Id=0. Shall the above line automatically update the Id of the entity, and make the instance persistent? Or should it do so on transaction.Commit? Or do I have to somehow code that explicitly? Obviously the Id of the database row (new, since transient) is autogenerated and saved as some number, but I'm talking about the actual parameter instance here. Which is the business logic instance. EDIT Mappings: public class StoreMap : ClassMap<Store> { public StoreMap() { Id(x => x.Id).GeneratedBy.Identity(); Map(x => x.Name); HasMany(x => x.Staff) // 1:m .Cascade.All(); HasManyToMany(x => x.Products) // m:m .Cascade.All() .Table("StoreProduct"); } } public class EmployeeMap : ClassMap<Employee> { public EmployeeMap() { Id(x => x.Id).GeneratedBy.Identity(); Map(x => x.FirstName); Map(x => x.LastName); References(x => x.Store); // m:1 } } public class ProductMap : ClassMap<Product> { public ProductMap() { Id(x => x.Id).GeneratedBy.Identity(); Map(x => x.Name).Length(20); Map(x => x.Price).CustomSqlType("decimal").Precision(9).Scale(2); HasManyToMany(x => x.StoresStockedIn) .Cascade.All() .Inverse() .Table("StoreProduct"); } } EDIT2 Class definitions: public class Store { public int Id { get; private set; } public string Name { get; set; } public IList<Product> Products { get; set; } public IList<Employee> Staff { get; set; } public Store() { Products = new List<Product>(); Staff = new List<Employee>(); } // AddProduct & AddEmployee is required. "NH needs you to set both sides before // it will save correctly" public void AddProduct(Product product) { product.StoresStockedIn.Add(this); Products.Add(product); } public void AddEmployee(Employee employee) { employee.Store = this; Staff.Add(employee); } } public class Employee { public int Id { get; private set; } public string FirstName { get; set; } public string LastName { get; set; } public Store Store { get; set; } } public class Product { public int Id { get; private set; } public string Name { get; set; } public decimal Price { get; set; } public IList<Store> StoresStockedIn { get; private set; } }
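As a hedged illustration of the behavior in question (sessionFactory below is an assumed, already-configured ISessionFactory): with GeneratedBy.Identity(), NHibernate has to issue the INSERT at save time to obtain the id, so the generated value is available before Commit - but note that SaveOrUpdateCopy() returns the persistent copy rather than updating the instance you pass in.

// Hedged sketch - not from the question; illustrates identity-id assignment
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var store = new Store { Name = "Corner Shop" };

    // identity generator: the INSERT runs immediately to fetch the new id
    var persistent = (Store)session.SaveOrUpdateCopy(store);

    tx.Commit();

    // store.Id is still 0 here - SaveOrUpdateCopy works on a copy;
    // persistent.Id holds the database-generated identity value
}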

    Read the article

  • Keeps "SSH timeout" error in our AWS instance- how do i diagnose?

    - by ming yeow
I am befuddled by this error. We keep failing to SSH into our AWS instance, whether via deployment or via the console. I have tried rebooting a few times, but it does not seem to be helping. Here are a couple of error messages I keep getting: connection failed for: HOST.NAME.amazonaws.com (Errno::ETIMEDOUT: Operation timed out - connect(2)) 111.222.333.444: ssh connection failed at 2010-07-02 03:39:37 I also SSHed in when it was up, and monitored "top" when ssh times out. Looking at the memory logs, it does not look like any program was hogging memory.

    Read the article

  • Auto-Attach EBS-volume to a New Spot Instance?

    - by Jeff
I am experimenting with EC2 spot instances, and need some data to be retained between terminations. Now as I understand it, when the current price goes above my max. bid, the instance will be automatically terminated. I assume any init scripts I have will be run on shutdown so I can push data off to the EBS volume before unmounting. My question is, how can I automatically mount the same EBS volume on the new spot instance once the price goes down, since it won't have any of my init scripts that I would've loaded onto the root volume the first time? Do I have to create a custom AMI, or is there some other way to achieve this?
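One hedged sketch of the custom-AMI route (the ids and device name below are placeholders, and this assumes the AWS SDK for .NET with credentials already configured): a bootstrap step baked into the AMI could re-attach the persistent volume each time a new spot instance comes up.

// Hedged sketch - attach a known EBS volume at instance startup
using Amazon.EC2;
using Amazon.EC2.Model;

var ec2 = new AmazonEC2Client();  // region/credentials assumed configured

ec2.AttachVolume(new AttachVolumeRequest
{
    VolumeId = "vol-12345678",   // the volume holding the retained data
    InstanceId = "i-90abcdef",   // e.g. read from instance metadata
    Device = "/dev/sdf"
});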

    Read the article

  • When to increase AWS RDS MySQL Server instance to larger CPU/RAM?

    - by rksprst
I'm wondering at what stage I need to move the RDS MySQL server image to a larger CPU/RAM instance. The CPU utilization graph is near 0. The Avg Free Memory is around 150MB. The Avg Swap Usage is 420MB. Read latency is 0-20ms/op; it spikes up randomly. Write latency is on average 5ms/op but spikes up to 10-20ms/op. Are there some common rules here that I should follow? Thanks!

    Read the article

  • Can I have HTTPS and HTTP for a single instance of an application?

    - by Sivakanesh
I'm planning a web application that will have its own server behind the corporate firewall. There will be two sets of users, internal and external to the organisation. Internal users will be located inside the firewall, the same as the application server, and the external users are outside, over the internet. All users will be authenticated via a login by the web application. I would like a setup where the external users will be required to access the whole of the application using SSL and the internal users via a standard http connection. I would like to know if it is possible to set up a single instance of the application so that it can be accessed via SSL by external (over the internet) users AND over http by internal users? Thanks

    Read the article

  • How do I point one virtual host to another instance of apache running at another port on the same bo

    - by sacamano
Hi there. I've got two apache2 instances running on my box. One came with a bitnami redmine stack, whose sole purpose is to host Redmine at host:8080/redmine. The other apache instance is running with php and such and is where I specify all the VHosts for my domains. Now I'd like to point redmine.somedomain.com at www.somedomain.com:8080/redmine so that redmine is accessible through a subdomain and on port 80. Redmine is a Ruby on Rails app and runs with Phusion Passenger, so I can't just point the vhost to the htdocs directory of the redmine install. How is this done? I've tinkered with ProxyPass and ProxyPassReverse but I just can't get it working. All help is greatly appreciated.

    Read the article

  • Backup database from default (unnamed) sql server instance with powershell.

    - by sparks
    Trying to connect to an instance of SQL Server 2008 on a server we'll call Sputnik. There are no firewalls in between the two devices. Right now I'm just trying to list databases [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null $servername = "Sputnik" $remoteServer = New-Object("Microsoft.SqlServer.Management.Smo.Server") $servername $remoteServer.databases The following error message occurs: The following exception was thrown when trying to enumerate the collection: "Failed to connect to server Sputnik.". At line:1 char:15 + $remoteServer. <<<< databases + CategoryInfo : NotSpecified: (:) [], ExtendedTypeSystemException + FullyQualifiedErrorId : ExceptionInGetEnumerator
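A hedged aside for comparison: the same SMO object model can be driven from C#, and one possible explanation for the symptom is that "." or "(local)" connects over Shared Memory while the machine name may resolve through TCP/Named Pipes, which can time out if those protocols aren't enabled.

// Hedged C# equivalent of the PowerShell above - requires the SMO assemblies
using System;
using Microsoft.SqlServer.Management.Smo;

class ListDatabases
{
    static void Main()
    {
        // "." (or "(local)") addresses the default instance directly
        var server = new Server(".");
        foreach (Database db in server.Databases)
            Console.WriteLine(db.Name);
    }
}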

    Read the article

  • ServerName no longer works as SQL Server 2008 Default instance name?

    - by TomK
    Folks, Somehow in struggling to install something (Sage MIP), I have unsettled my system's SQL Server 2008 instance names. If I fired up SSMS and connected to "MySystemName" (while logged into MySystemName), I used to immediately connect and could view my databases. Now if I connect to "(local)" or "." I connect just fine...but "MySystemName" gives me a timeout error. If I log in with "." and do "SELECT @@SERVERNAME" it comes back with "MySystemName" just fine. I've double checked that my network name is in the Security\Logins list, and it is, as a dbadmin. Any suggestions? I thought that having SQL Server 2005 Express might be a problem (the "battle of the default instances"), so I uninstalled that. No better.

    Read the article

  • I setup vsftpd on ubuntu server on my ec2 instance, how to connect using SSH?

    - by Blankman
I connect to my ec2 instance using ssh so I don't have to log in each time. I just installed vsftpd on the ubuntu server, but when I connect it obviously asks for my username and password. Since I connect using the ubuntu user that my AMI comes with, I don't even know the root password. Is there a way I can log in via ftp using SSH? Or do I just create a user on the system for ftp purposes? I've locked ftp to my IP address, and I will shut down the ftp service once I'm done as I don't need it running 99.99999% of the time.

    Read the article

  • How to identify which website on my instance is receiving lots of traffic?

    - by Bob Flemming
I am new to server administration and have just set up a new quad core instance which hosts around 15 websites. Over the past couple of days my server load has been averaging around 15.00. I believe it is because one (or maybe more) of the websites is getting spammed by spambots. Typing 'top' at the command line shows many processes from user 'www-data', which indicates lots of web traffic. Is there an easy way to identify which one of my sites is taking a hammering? Reading the apache error logs is a very difficult task as most of the websites receive daily traffic of 10,000+ unique users. Any help would be appreciated!

    Read the article

  • Is it safe to sync a Firefox instance in both directions?

    - by java.is.for.desktop
Hello, everyone! I want to use some tool (not yet decided which one) to sync a Firefox instance (more exactly: the user directory) between two machines. (EDIT: I really want to sync everything in the user directory.) Would you assume that this is safe? What would happen if some files are newer on one machine, and some other files are newer on the other machine? Could this lead to inconsistencies when, let's say, there are some inter-file references? In general, I don't have good experience with syncing applications in both directions. Most applications seem not to be suited for this.

    Read the article

  • Is it possible as an Administrator to gain access to a SQL Server 2008 instance without changing any passwords?

    - by adhocgeek
I have administrative access on our network, but I don't manage the installation of all servers or software. On some of our machines instances of SQL Server 2008 have been installed which I need to be able to access, but since my account hasn't been explicitly granted a login, I can't get in. Is there a way to get into the database without changing anyone's password? (E.g. I could solve this by changing the password of the user who installed the instance, assuming they've set themselves up as admin, and then logging on as them, but I don't want to have to do this.)

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
Hi, What I've tried: I have two email storage architectures. Old and new. Old: courier-imapds on several (18+) 1TB-storage servers. If one of them shows signs of running out of disk space, we migrate a few email accounts to another server. The servers don't have replicas, and there are no backups either. New: dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs. We store fresh mails on the SSDs and run a doveadm purge to move mails older than a day to the SATA disks. There is an identical server which has a max-15min-old rsync backup from the primary server. Higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server. The rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO. Scaling out was expected to be done by provisioning another pair of such huge servers. On facing disk-crunch issues like in the old architecture, email accounts would be moved around manually. Concerns/doubts: I'm not convinced that the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access. And if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data. Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"? Does one have to re-write (parts of) these to work with an object-storage of some sort - such as OpenStack object storage?

    Read the article

  • how reference copy is handled in Objective-C?

    - by Cathy
Object graph: [Instance A] "tree" holds references to both [Instance B] "apple" and [Instance C] "bug":

     [Instance A] tree
        /        \
       ↓          ↓
[Instance B]   [Instance C]
    apple          bug

Question: Instance A holds two references, one to Instance B and one to Instance C. If I retain or release Instance A, which has references to the other two instances, what happens to the various reference counts?

    Read the article

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5 and along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications. If you're interested in a high level overview on what ASP.NET Web API is and how it fits into the ASP.NET stack you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by example introduction to ASP.NET Web API. All the code discussed in this article is available in GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine Article and updated for RTM release of ASP.NET Web API] Getting Started To start I’ll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I’ll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers. Add a Web API Controller Class Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1. Figure 1: This is how you create a new Controller Class in Visual Studio   Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I’ll use a Music Album model to demonstrate basic behavior of Web API. The model consists of albums and related songs where an album has properties like Name, Artist and YearReleased and a list of songs with a SongName and SongLength as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on Github. To add the file manually, create a new folder called Model, and add a new class Album.cs and copy the code into it. There’s a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current that I’ll use for the examples. Before we look at what goes into the controller class though, let’s hook up routing so we can access this new controller. Hooking up Routing in Global.asax To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods. 
Using an extension method to ASP.NET's static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.using System; using System.Web.Routing; using System.Web.Http; namespace AspNetWebApi { public class Global : System.Web.HttpApplication { protected void Application_Start(object sender, EventArgs e) { RouteTable.Routes.MapHttpRoute( name: "AlbumVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller="AlbumApi" } ); } } } This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate respectively. HTTP Verb Routing Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I've defined does not include an {action} route value or action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call on the controller, and a GET request maps to any method that starts with Get. So methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against a route, query string or POST values. In lieu of the method name, the [HttpGet,HttpPost,HttpPut,HttpDelete, etc] attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST style resource APIs, it's not required and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I'll talk more about alternate routes later. When you're finished with initial creation of files, your project should look like Figure 2.   Figure 2: The initial project has the new API Controller Album model   Creating a small Album Model Now it's time to create some controller methods to serve data. For these examples, I'll use a very simple Album and Songs model to play with, as shown in Listing 2. 
public class Song { public string AlbumId { get; set; } [Required, StringLength(80)] public string SongName { get; set; } [StringLength(5)] public string SongLength { get; set; } } public class Album { public string Id { get; set; } [Required, StringLength(80)] public string AlbumName { get; set; } [StringLength(80)] public string Artist { get; set; } public int YearReleased { get; set; } public DateTime Entered { get; set; } [StringLength(150)] public string AlbumImageUrl { get; set; } [StringLength(200)] public string AmazonUrl { get; set; } public virtual List<Song> Songs { get; set; } public Album() { Songs = new List<Song>(); Entered = DateTime.Now; // Poor man's unique Id off GUID hash Id = Guid.NewGuid().GetHashCode().ToString("x"); } public void AddSong(string songName, string songLength = null) { this.Songs.Add(new Song() { AlbumId = this.Id, SongName = songName, SongLength = songLength }); } } Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:public static class AlbumData { // sample data - static list public static List<Album> Current = CreateSampleAlbumData(); /// <summary> /// Create some sample data /// </summary> /// <returns></returns> public static List<Album> CreateSampleAlbumData() { … }} You can check out the full code for the data generation online. Creating an AlbumApiController Web API shares many concepts with ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.public class AlbumApiController : ApiController { public IEnumerable<Album> GetAlbums() { var albums = AlbumData.Current.OrderBy(alb => alb.Artist); return albums; } public Album GetAlbum(string title) { var album = AlbumData.Current .SingleOrDefault(alb => alb.AlbumName.Contains(title)); return album; }} To access the first two requests, you can use the following URLs in your browser: http://localhost/aspnetWebApi/albums and http://localhost/aspnetWebApi/albums/Dirty%20Deeds Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route. 
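As an aside - and purely as a hedged sketch, since the article only mentions self-hosting in passing - the same controller and route can also run outside of IIS with the Web API self-host package (System.Web.Http.SelfHost):

// Hedged self-host sketch - requires the Web API self-host NuGet package
var config = new HttpSelfHostConfiguration("http://localhost:8080");
config.Routes.MapHttpRoute(
    name: "AlbumVerbs",
    routeTemplate: "albums/{title}",
    defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" });

using (var server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();
    Console.WriteLine("Server running - press Enter to quit");
    Console.ReadLine();
}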
Content Negotiation When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3. Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.   Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for: Accept: text/html, application/xhtml+xml,application/xml; q=0.9,*/*;q=0.8 IE9 asks for: Accept: text/html, application/xhtml+xml, */* Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types and returns an XML response. IE9 does not include an Accept header type that works on Web API by default, and so it returns the default format, which is JSON. This is an important and very useful feature that was missing from any previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don't have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code – Web API automatically performs the deserialization from the content. Accessing Web API JSON Data with jQuery A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.$.getJSON("albums/", function (albums) { // make knockout template visible $(".album").show(); // create view object and attach array var view = { albums: albums }; ko.applyBindings(view); }); Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js). Figure 4: The Album Display sample uses JSON data loaded from Web API.
The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which as you can see requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it's entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:$(".albumlink").live("click", function () { var id = $(this).data("id"); // title $.getJSON("albums/" + id, function (album) { ko.applyBindings(album, $("#divAlbumDialog")[0]); $("#divAlbumDialog").show(); }); }); Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute. Explicitly Overriding Output Format When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR). If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those Controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation. 
To do this, I can adjust the output format and headers as shown in Listing 4.public HttpResponseMessage GetAlbums() { var albums = AlbumData.Current.OrderBy(alb => alb.Artist); // Create a new HttpResponse with Json Formatter explicitly var resp = new HttpResponseMessage(HttpStatusCode.OK); resp.Content = new ObjectContent<IEnumerable<Album>>( albums, new JsonMediaTypeFormatter()); // Get Default Formatter based on Content Negotiation //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums); resp.Headers.ConnectionClose = true; resp.Headers.CacheControl = new CacheControlHeaderValue(); resp.Headers.CacheControl.Public = true; return resp; } This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON.  If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums); This provides you with an HttpResponseMessage that's pre-configured with the default formatter based on Content Negotiation. Once you have an HttpResponseMessage object you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponseMessage than on the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks. Non-Serialized Results The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method. [HttpGet] public HttpResponseMessage AlbumArt(string title) { var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title)); if (album == null) { var resp = Request.CreateResponse<ApiMessageError>( HttpStatusCode.NotFound, new ApiMessageError("Album not found")); return resp; } // kinda silly - we would normally serve this directly // but hey - it's a demo. var http = new WebClient(); var imageData = http.DownloadData(album.AlbumImageUrl); // create response and return var result = new HttpResponseMessage(HttpStatusCode.OK); result.Content = new ByteArrayContent(imageData); result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg"); return result; } The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs - such as a resource that can't be found or a validation error – you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error message, as it's an image). 
Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image. When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:return Request.CreateResponse<ApiMessageError>( HttpStatusCode.NotFound, new ApiMessageError("Album not found") );   If the album can be found, the image will be returned. The image is downloaded into a byte[] array, and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and handle the default formatter assignments that should be used to send the data out. Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing. They also make it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value. Routing Again OK, let's get back to the image example. Using the original HTTP Verb routing setup, there's no good way to serve the image. In order to return my album art image I'd like to use a URL like this: http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image In order to create a URL like this, I have to create a new Controller because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action based routing into an HTTP Verb routing controller - you can only map HTTP Verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations mapped to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above Image URL work with any combination of HTTP Verb plus Custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers.  Personally, I think it's easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class. 
Routing Again

OK, let’s get back to the image example. Using the original HTTP Verb routing we set up, there's no good way to serve the image. In order to return my album art image, I’d like to use a URL like this:

http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order to create a URL like this, I have to create a new controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action-based routing into an HTTP Verb-routed controller - you can only map HTTP Verbs, and each method has to be unique based on its parameter signature. You can't have multiple GET operations on methods with the same signature, so GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller.

There are a number of ways around this, but all involve additional controllers. Personally, I think it’s easier to use explicit action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller - AlbumRpcApiController - to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.

For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

Now I can use either of the following URLs to access the image:

Custom route: (/albums/{title}/image)
http://localhost/aspnetWebApi/albums/PowerAge/image

Action route: (/albums/rpc/{action}/{title})
http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I’m using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

public HttpResponseMessage PostAlbum(Album album)
{
    if (!this.ModelState.IsValid)
    {
        // my custom error class
        var error = new ApiMessageError() { message = "Model is invalid" };

        // add errors into our client error model for the client
        foreach (var prop in ModelState.Values)
        {
            var modelError = prop.Errors.FirstOrDefault();
            if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                error.errors.Add(modelError.ErrorMessage);
            else
                error.errors.Add(modelError.Exception.Message);
        }

        return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
    }

    // update song ids which aren't provided
    foreach (var song in album.Songs)
        song.AlbumId = album.Id;

    // see if the album exists already
    var matchedAlbum = AlbumData.Current
        .SingleOrDefault(alb => alb.Id == album.Id ||
                                alb.AlbumName == album.AlbumName);
    if (matchedAlbum == null)
        AlbumData.Current.Add(album);
    else
        matchedAlbum = album;

    // return a string to show that the value got here
    var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
    resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                     Encoding.UTF8, "text/plain");
    return resp;
}

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods.

Like MVC, you can check the model by looking at ModelState.IsValid. If it’s not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the ApiMessageError result object.

When a binding error occurs, you’ll want to return an HTTP error response, and it’s best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client, along with a Conflict HTTP status code response.

If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you’ve added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added.
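Before moving on to the client code: since ApiMessageError keeps coming up in these listings, here's one possible shape for it, reconstructed purely from how it's used above (a message string plus a list of detailed errors). The article doesn't show the actual class, so treat this as a sketch:

public class ApiMessageError
{
    // lowercase property names match the JSON the client code reads
    public string message { get; set; }
    public List<string> errors { get; set; }

    public ApiMessageError()
    {
        errors = new List<string>();
    }

    public ApiMessageError(string errorMessage) : this()
    {
        message = errorMessage;
    }
}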
The jQuery code to call the POST operation from the client is shown in Listing 7.

var id = new Date().getTime().toString();
var album = {
    "Id": id,
    "AlbumName": "Power Age",
    "Artist": "AC/DC",
    "YearReleased": 1977,
    "Entered": "2002-03-11T18:24:43.5580794-10:00",
    "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
    "AmazonUrl": "http://www.amazon.com/…",
    "Songs": [
        { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
        { "SongName": "Downpayment Blues", "SongLength": 4.22 },
        { "SongName": "Riff Raff", "SongLength": 2.42 }
    ]
}

$.ajax({
    url: "albums/",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(album),
    processData: false,
    beforeSend: function (xhr) {
        // not required since JSON is the default output
        xhr.setRequestHeader("Accept", "application/json");
    },
    success: function (result) {
        // reload list of albums
        page.loadAlbums();
    },
    error: function (xhr, status, p3, p4) {
        var err = "Error";
        if (xhr.responseText && xhr.responseText[0] == "{")
            err = JSON.parse(xhr.responseText).message;
        alert(err);
    }
});

The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server via POST. The data is turned into JSON, and the content type is set to application/json so that the server knows to deserialize it into an Album instance. The jQuery code hooks up success and failure handlers: on success, the list of albums is reloaded; if an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded and, if it is, extract it by deserializing it and accessing the .message property.

REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I’m not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow the REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}

To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

Model Binding with UrlEncoded POST Variables

In Listing 7, I used JSON to post a serialized object to a server method that accepts a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to send plain UrlEncoded POST values to the server. Web API supports model binding that works similarly (but not identically) to MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

$.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

Although the code looks very similar to the client code we used before to pass JSON, here the data is passed as URL-encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
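Either way - JSON or form variables - the target of the binding is the same Album class. Here's a sketch of what Album and Song could look like, inferred from the JSON in Listing 7 and the server code; the real classes in the sample may differ (for instance in the type used for SongLength):

public class Album
{
    public string Id { get; set; }            // client generates a string id
    public string AlbumName { get; set; }
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    public string AlbumImageUrl { get; set; }
    public string AmazonUrl { get; set; }
    public List<Song> Songs { get; set; }
}

public class Song
{
    public string AlbumId { get; set; }       // filled in server-side in PostAlbum()
    public string SongName { get; set; }
    public decimal SongLength { get; set; }
}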
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. So although the client code is different, the server can handle both the JSON object and the UrlEncoded POST values.

Dynamic Access to POST Data

There are also a few options available for accessing POST data dynamically, if you know what type of data you're dealing with. If you have UrlEncoded POST values, you can access them dynamically through a FormDataCollection:

[HttpPost]
public string PostAlbum(FormDataCollection form)
{
    return string.Format("{0} - released {1}",
        form.Get("AlbumName"), form.Get("YearReleased"));
}

The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. As a general rule, though, while ASP.NET's functionality is always available when Web API is hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET.

If your client is sending JSON to your server and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

There are quite a few options available for receiving data with Web API, which gives you more choices for the right tool for the job.

Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content - the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:

Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API

Handling Delete Operations

Finally, to round out the server API code of the album example we've been discussing, here’s the DELETE verb controller method that allows removal of an album by its title:

public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = AlbumData.Current
        .Where(alb => alb.AlbumName == title)
        .SingleOrDefault();
    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    AlbumData.Current.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

To call this action method using jQuery, you can use:

$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "DELETE",
        success: function (result) {
            $el.fadeOut().remove();
        },
        error: jqError
    });
});

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply the resource ID (the title) via a route value or the query string.
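For completeness, here's a rough .NET client equivalent of that jQuery DELETE call, using HttpClient - a sketch of mine; the base URL assumes the sample app's address:

var client = new HttpClient();
var response = await client.DeleteAsync(
    "http://localhost/AspNetWebApi/albums/" + Uri.EscapeDataString("PowerAge"));

// DeleteAlbum returns 204 NoContent on success, 404 NotFound otherwise
if (response.StatusCode == HttpStatusCode.NoContent)
    Console.WriteLine("Album deleted");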
Routing Conflicts

In all requests except the AlbumArt image example shown so far, I used the HTTP Verb routing that I set up in Listing 1. HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn’t really fit - the return of an image, where I created a custom route albums/{title}/image that required the creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller, because of the limited mapping of HTTP verbs that HTTP Verb routing imposes.

To understand some of the problems with verb routing, let’s look at another example. Let’s say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:

[HttpGet]
public IQueryable<Album> GetSortableAlbums()
{
    var albums = AlbumData.Current;

    // generally this should be done only on actual queryable results (EF etc.)
    // Done here because we're running with a static list, but otherwise it might be slow
    return albums.AsQueryable();
}

If you compile this code and now try to access the /albums/ link, you get an error: Multiple Actions were found that match the request.

HTTP Verb routing only allows access to one GET operation per parameter/route-value match. If more than one method exists with the same parameter signature, it doesn’t work. As I mentioned earlier for the image display, the only solution to get this method to work is to move it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I rename the method to SortableAlbums() so I’m not using a Get prefix. This also makes the action parameter look cleaner in the URL - it looks less like a method and more like a noun. I can then create a new route that handles direct action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

As I am explicitly adding a route segment - rpc - into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

Error Handling

I’ve already done some minimal error handling in the examples. For example, in Listing 6, I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? Say you have a controller method like this:

[HttpGet]
public void ThrowException()
{
    throw new UnauthorizedAccessException("Unauthorized Access Sucka");
}

You can call it with this:

http://localhost/AspNetWebApi/albums/rpc/ThrowException

The default exception handling displays a 500-status response with the serialized exception - on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;

If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action:
[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.BadRequest,
        new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses any installed exception filters and just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format - XML or JSON. You can pass any content to the HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here’s a small helper method on the controller that you might use to send exception info back to the client consistently:

private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(statusCode,
        new ApiMessageError() { message = message });
    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

[HttpGet]
public void ThrowErrorSafe()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Exception Filters

Another, more global solution is to create an exception filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation.

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.InternalServerError;

        var exType = context.Exception.GetType();
        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError() { message = context.Exception.Message };

        // create a new response and attach our ApiError object
        // which now gets returned on ANY exception result
        var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
        context.Response = errorResponse;

        base.OnException(context);
    }
}

Exception filter attributes can be assigned to an ApiController class like this:

[UnhandledExceptionFilter]
public class AlbumRpcApiController : ApiController

or you can assign the filter globally to all controllers by adding it to the HTTP configuration's Filters collection:

GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can, on failure, expect to see a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up. You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions - you often don't want to display error information from 'unknown' exceptions, as it may contain sensitive system information or info that's not generally useful to users of your application/site.
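One direction you could take this - my sketch, not code from the article - is a small custom exception type that carries its own status code, which a filter like the one in Listing 8 could then special-case:

public class ApiException : Exception
{
    public HttpStatusCode StatusCode { get; private set; }

    public ApiException(string message,
        HttpStatusCode statusCode = HttpStatusCode.BadRequest)
        : base(message)
    {
        StatusCode = statusCode;
    }
}

// Inside UnhandledExceptionFilter.OnException() you might then add:
//
//     var apiEx = context.Exception as ApiException;
//     if (apiEx != null)
//         status = apiEx.StatusCode;
//
// so deliberate application errors keep their intended status code,
// while anything else falls through to the generic 500 handling.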
This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one of the places where you can plug into the Web API request flow to modify output. Many more hooks exist, and I’ll take a closer look at extensibility in Part 2 of this article in the future.

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing HTTP semantics and making them easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that’s highly functional, easy to work with, and extensible to boot. I think Microsoft has hit a home run with Web API.

Related Resources

Where does ASP.NET Web API fit?
Sample Source Code on GitHub
Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
Creating a JSONP Formatter for ASP.NET Web API
Removing the XML Formatter from ASP.NET Web API Applications

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api
