Search Results

Search found 42627 results on 1706 pages for 'type conversion'.

  • Extending Oracle CEP with Predictive Analytics

    - by vikram.shukla(at)oracle.com
    Introduction: OCEP is often used as a business rules engine to execute a set of business logic rules via CQL statements, and to make decisions based on the outcome of those rules. There are times when configuring rules manually is sufficient, because an application needs to deal with only a small and well-defined set of static rules. In many situations, however, customers don't want to pre-define such rules, for two reasons. First, they are dealing with events with lots of columns, and manually crafting such rules for each column or set of columns and combinations thereof is almost impossible. Second, they are content with probabilistic outcomes and do not care about 100% precision. The former is the case when a user is dealing with data of high dimensionality, the latter when an application can live with "false" positives, as they can be discarded after further inspection, say by a Human Task component in a Business Process Management product. The primary goal of this blog post is to show how this can be achieved by combining OCEP with Oracle Data Mining® and leveraging the latter's rich set of algorithms and functionality to do predictive analytics in real time on streaming events. The secondary goal is to show how OCEP can be extended to invoke arbitrary external computation in an RDBMS from within CEP. This extensibility facility is known as the JDBC cartridge. The rest of the post describes the steps required to achieve this. We use the dataset available at http://blogs.oracle.com/datamining/2010/01/fraud_and_anomaly_detection_made_simple.html to showcase the capabilities, and use it to show how transaction anomalies or fraud can be detected. Building the model: Follow the self-explanatory steps described at the above URL to build the model. It is very simple - it uses built-in Oracle Data Mining PL/SQL packages to cleanse, normalize and build the model out of the dataset. You can also use the graphical Oracle Data Miner® to build the models. To summarize, it involves specifying which algorithm to use - in this case Support Vector Machines, since we're trying to find anomalies in a highly dimensional dataset - and building the model on the data in the table for the algorithm specified. For this example, the table was populated in the scott/tiger schema with appropriate privileges. Configuring the Data Source: This is the first step in building a CEP application using such an integration. Our datasource looks as follows in the server config file. It is advisable to use the Visualizer to add it to the running server dynamically, rather than editing the file manually.
<data-source>         <name>DataMining</name>         <data-source-params>             <jndi-names>                 <element>DataMining</element>             </jndi-names>             <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol>         </data-source-params>         <connection-pool-params>             <credential-mapping-enabled></credential-mapping-enabled>             <test-table-name>SQL SELECT 1 from DUAL</test-table-name>             <initial-capacity>1</initial-capacity>             <max-capacity>15</max-capacity>             <capacity-increment>1</capacity-increment>         </connection-pool-params>         <driver-params>             <use-xa-data-source-interface>true</use-xa-data-source-interface>             <driver-name>oracle.jdbc.OracleDriver</driver-name>             <url>jdbc:oracle:thin:@localhost:1522:orcl</url>             <properties>                 <element>                     <value>scott</value>                     <name>user</name>                 </element>                 <element>                     <value>{Salted-3DES}AzFE5dDbO2g=</value>                     <name>password</name>                 </element>                                 <element>                     <name>com.bea.core.datasource.serviceName</name>                     <value>oracle11.2g</value>                 </element>                 <element>                     <name>com.bea.core.datasource.serviceVersion</name>                     <value>11.2.0</value>                 </element>                 <element>                     <name>com.bea.core.datasource.serviceObjectClass</name>                     <value>java.sql.Driver</value>                 </element>             </properties>         </driver-params>     </data-source>   Designing the EPN: The EPN is very simple in this example. We briefly describe each of the components. The adapter ("DataMiningAdapter") reads data from a .csv file and sends it to the CQL processor downstream. The event payload here is same as that of the table in the database (refer to the attached project or do a "desc table-name" from a SQL*PLUS prompt). While this is for convenience in this example, it need not be the case. One can still omit fields in the streaming events, and need not match all columns in the table on which the model was built. Better yet, it does not even need to have the same name as columns in the table, as long as you alias them in the USING clause of the mining function. (Caveat: they still need to draw values from a similar universe or domain, otherwise it constitutes incorrect usage of the model). There are two things in the CQL processor ("DataMiningProc") that make scoring possible on streaming events. 1.      User defined cartridge function Please refer to the OCEP CQL reference manual to find more details about how to define such functions. We include the function below in its entirety for illustration. 
<?xml version="1.0" encoding="UTF-8"?> <jdbcctxconfig:config     xmlns:jdbcctxconfig="http://www.bea.com/ns/wlevs/config/application"     xmlns:jc="http://www.oracle.com/ns/ocep/config/jdbc">        <jc:jdbc-ctx>         <name>Oracle11gR2</name>         <data-source>DataMining</data-source>               <function name="prediction2">                                 <param name="CQLMONTH" type="char"/>                      <param name="WEEKOFMONTH" type="int"/>                      <param name="DAYOFWEEK" type="char" />                      <param name="MAKE" type="char" />                      <param name="ACCIDENTAREA"   type="char" />                      <param name="DAYOFWEEKCLAIMED"  type="char" />                      <param name="MONTHCLAIMED" type="char" />                      <param name="WEEKOFMONTHCLAIMED" type="int" />                      <param name="SEX" type="char" />                      <param name="MARITALSTATUS"   type="char" />                      <param name="AGE" type="int" />                      <param name="FAULT" type="char" />                      <param name="POLICYTYPE"   type="char" />                      <param name="VEHICLECATEGORY"  type="char" />                      <param name="VEHICLEPRICE" type="char" />                      <param name="FRAUDFOUND" type="int" />                      <param name="POLICYNUMBER" type="int" />                      <param name="REPNUMBER" type="int" />                      <param name="DEDUCTIBLE"   type="int" />                      <param name="DRIVERRATING"  type="int" />                      <param name="DAYSPOLICYACCIDENT"   type="char" />                      <param name="DAYSPOLICYCLAIM" type="char" />                      <param name="PASTNUMOFCLAIMS" type="char" />                      <param name="AGEOFVEHICLES" type="char" />                      <param name="AGEOFPOLICYHOLDER" type="char" />                      <param name="POLICEREPORTFILED" type="char" />                      <param name="WITNESSPRESNT" type="char" />                      <param name="AGENTTYPE" type="char" />                      <param name="NUMOFSUPP" type="char" />                      <param name="ADDRCHGCLAIM"   type="char" />                      <param name="NUMOFCARS" type="char" />                      <param name="CQLYEAR" type="int" />                      <param name="BASEPOLICY" type="char" />                                     <return-component-type>char</return-component-type>                                                      <sql><![CDATA[             SELECT to_char(PREDICTION_PROBABILITY(CLAIMSMODEL, '0' USING *))               AS probability             FROM (SELECT  :CQLMONTH AS MONTH,                                            :WEEKOFMONTH AS WEEKOFMONTH,                          :DAYOFWEEK AS DAYOFWEEK,                           :MAKE AS MAKE,                           :ACCIDENTAREA AS ACCIDENTAREA,                           :DAYOFWEEKCLAIMED AS DAYOFWEEKCLAIMED,                           :MONTHCLAIMED AS MONTHCLAIMED,                           :WEEKOFMONTHCLAIMED,                             :SEX AS SEX,                           :MARITALSTATUS AS MARITALSTATUS,                            :AGE AS AGE,                           :FAULT AS FAULT,                           :POLICYTYPE AS POLICYTYPE,                            :VEHICLECATEGORY AS VEHICLECATEGORY,                           :VEHICLEPRICE AS VEHICLEPRICE,                           :FRAUDFOUND AS FRAUDFOUND,                           :POLICYNUMBER AS 
POLICYNUMBER,                           :REPNUMBER AS REPNUMBER,                           :DEDUCTIBLE AS DEDUCTIBLE,                            :DRIVERRATING AS DRIVERRATING,                           :DAYSPOLICYACCIDENT AS DAYSPOLICYACCIDENT,                            :DAYSPOLICYCLAIM AS DAYSPOLICYCLAIM,                           :PASTNUMOFCLAIMS AS PASTNUMOFCLAIMS,                           :AGEOFVEHICLES AS AGEOFVEHICLES,                           :AGEOFPOLICYHOLDER AS AGEOFPOLICYHOLDER,                           :POLICEREPORTFILED AS POLICEREPORTFILED,                           :WITNESSPRESNT AS WITNESSPRESENT,                           :AGENTTYPE AS AGENTTYPE,                           :NUMOFSUPP AS NUMOFSUPP,                           :ADDRCHGCLAIM AS ADDRCHGCLAIM,                            :NUMOFCARS AS NUMOFCARS,                           :CQLYEAR AS YEAR,                           :BASEPOLICY AS BASEPOLICY                 FROM dual)                 ]]>         </sql>        </function>     </jc:jdbc-ctx> </jdbcctxconfig:config> 2.      Invoking the function for each event. Once this function is defined, you can invoke it from CQL as follows: <?xml version="1.0" encoding="UTF-8"?> <wlevs:config xmlns:wlevs="http://www.bea.com/ns/wlevs/config/application">   <processor>     <name>DataMiningProc</name>     <rules>        <query id="q1"><![CDATA[                     ISTREAM(SELECT S.CQLMONTH,                                   S.WEEKOFMONTH,                                   S.DAYOFWEEK, S.MAKE,                                   :                                         S.BASEPOLICY,                                    C.F AS probability                                                 FROM                                 StreamDataChannel [NOW] AS S,                                 TABLE(prediction2@Oracle11gR2(S.CQLMONTH,                                      S.WEEKOFMONTH,                                      S.DAYOFWEEK,                                       S.MAKE, ...,                                      S.BASEPOLICY) AS F of char) AS C)                       ]]></query>                 </rules>               </processor>           </wlevs:config>   Finally, the last stage in the EPN prints out the probability of the event being an anomaly. One can also define a threshold in CQL to filter out events that are normal, i.e., below a certain mark as defined by the analyst or designer. Sample Runs: Now let's see how this behaves when events are streamed through CEP. We use only two events for brevity, one normal and other one not. This is one of the "normal" looking events and the probability of it being anomalous is less than 60%. 
Event is: eventType=DataMiningOutEvent object=q1  time=2904821976256 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=300, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.58931702982118561 isTotalOrderGuarantee=true\nAnamoly probability: .58931702982118561 However, the following event is scored as an anomaly with a very high probability of 89%, so there is likely to be something wrong with it. A close look reveals that the value of the "deductible" field (10000) is not "normal". What exactly constitutes normal here? If you run a query on the database to find ALL distinct values for the "deductible" field, it returns the following set: {300, 400, 500, 700} Event is: eventType=DataMiningOutEvent object=q1  time=2598483773496 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=10000, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.89171554529576691 isTotalOrderGuarantee=true\nAnamoly probability: .89171554529576691 Conclusion: By way of this example, we show real-time scoring of events as they flow through CEP leveraging Oracle Data Mining, and how CEP applications can invoke arbitrary external computations (function shipping) in an RDBMS.

    Read the article

  • Do AdWords conversions only track AdWords visitors?

    - by atticae
    I have set up an AdWords conversion and put the conversion tracking JS code on a test page. However, I don't see any conversions tracked when I visit it. Does AdWords conversion tracking only register a conversion if the visitor comes to the site by clicking on an AdWords campaign? Google's help page advises me to test the code by clicking a campaign. However, I would like to use the tracking to track all conversions, not just AdWords ones. I considered using Analytics as well, but it seems you can only track via URL there, not JS, which would mean I would have to restructure a part of my page. (Because currently a conversion does not necessarily happen on a different URL.) Is AdWords tracking not a viable solution for tracking all visitor conversions on my site?

    Read the article

  • What type of encoding can I use to make a string shorter?

    - by Abe Miessler
    I am interested in encoding a string I have and I am curious if there is a type of encoding that can be used that will only include alpha and numeric characters and would preferably shorten the number of characters needed to represent the string. So far I have looked at using Base64 encoding to do this but it appears to make my string longer and sometimes includes == which I would like to avoid. Example: test name|120101 becomes dGVzdCBuYW1lfDEyMDEwMQ== which goes from 16 to 24 characters and includes non-alphanumeric. Does anyone know of a different type of encoding that I could use that will achieve my requirements? Bonus points if it's either built into the .NET framework or there exists a third party library that will do the encoding.
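    For reference, the 24-character result quoted above is what the standard .NET Base64 conversion produces for that 16-byte input; Base64 always expands data by roughly a third and pads with '='. A minimal sketch (the hex alternative shown is just one assumed way to stay alphanumeric-only):

        using System;
        using System.Text;

        class EncodingDemo
        {
            static void Main()
            {
                // Base64 emits 4 output characters per 3 input bytes and pads with '='
                // when the byte count is not a multiple of 3, so it cannot shorten a string.
                string input = "test name|120101";                     // 16 characters
                byte[] bytes = Encoding.UTF8.GetBytes(input);
                Console.WriteLine(Convert.ToBase64String(bytes));      // 24 characters, ends in "=="

                // Hex is alphanumeric-only, but doubles the length instead of shrinking it.
                string hex = BitConverter.ToString(bytes).Replace("-", "");
                Console.WriteLine(hex.Length);                         // 32
            }
        }

    Any encoding restricted to an alphanumeric alphabet carries less information per character than the original text, so it has to emit at least as many characters; genuinely shortening the value means compressing it (or shortening the underlying data) before encoding.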

    Read the article

  • Unity works on my PC but not on the Server. What did I miss?

    - by Erik France
    I have a web service using Microsoft Unity to hook the pieces together.  It all works fine on my PC but when I put it on the web server, I receive this error message: System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: The value of the property 'type' cannot be parsed. The error is: Method 'GetClaimsForUser' in type 'WebService.Implementation.ClaimsRetriever' from assembly 'WebService.Implementation, Version=1.0.0.0, Culture=neutral, PublicKeyToken=62cac0f1a908971a' does not have an implementation. If I look at the web.config, I see the following: <unity>     <typeAliases>       <typeAlias alias="ITokenGenerator" type="WebService.Interfaces.ITokenGenerator, WebService.Interfaces" />       <typeAlias alias="TokenGenerator" type="WebService.Implementation.TokenGenerator, WebService.Implementation" />       <typeAlias alias="IClaimsRetriever" type="WebService.Interfaces.IClaimsRetriever, WebService.Interfaces" />       <typeAlias alias="ClaimsRetriever" type="WebService.Implementation.ClaimsRetriever, WebService.Implementation" />       <typeAlias alias="TokenGeneratorSettings" type="WebService.Implementation.TokenGeneratorSettings, WebService.Implementation" />       <typeAlias alias="String" type="System.String, mscorlib" />     </typeAliases>     <containers>       <container>         <types>           <type type="ITokenGenerator" mapTo="TokenGenerator">             <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">               <constructor>                 <param name="retriever" parameterType="IClaimsRetriever">                   <dependency />                 </param>                 <param name="settings" parameterType="TokenGeneratorSettings">                   <dependency />                 </param>               </constructor>             </typeConfig>           </type>           <type type="IClaimsRetriever" mapTo="ClaimsRetriever">             <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">               <constructor>                 <param name="connectionStringName" parameterType="String">                   <value value="devDatabase" type="String" />                 </param>               </constructor>             </typeConfig>           </type>         </types>       </container>     </containers>   </unity> I have another web service, using an almost identical config running on the web server.  But this new web service will not run. Any ideas on what I have not told Unity to do?  Or maybe what I told Unity to do incorrectly?
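    For context, that error text is the runtime complaining that the ClaimsRetriever type in the deployed WebService.Implementation assembly has no body for a GetClaimsForUser method it is expected to provide - typically a sign that a stale or mismatched WebService.Interfaces / WebService.Implementation DLL is deployed on the server. A rough sketch of the pair of types the config is wiring up (the method signature and bodies below are assumptions, not taken from the question):

        // Assumed shapes only; the real types live in WebService.Interfaces / WebService.Implementation.
        public interface IClaimsRetriever
        {
            // If the implementation assembly on the server was built against an older
            // copy of this interface (without this member), loading the type fails with
            // "Method 'GetClaimsForUser' ... does not have an implementation".
            object GetClaimsForUser(string userName);
        }

        public class ClaimsRetriever : IClaimsRetriever
        {
            private readonly string connectionStringName;

            public ClaimsRetriever(string connectionStringName)
            {
                this.connectionStringName = connectionStringName;
            }

            public object GetClaimsForUser(string userName)
            {
                // ... look up claims using the named connection string ...
                return null;
            }
        }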

    Read the article

  • How to measure the conversion rate of your Amazon affiliate program?

    - by user359650
    I plan on selling products through the Amazon affiliate program. What I know I can track is: -what products people view on my website (default Google Analytics pageview behaviour). -what affiliate links people click on my website (with GA _trackEvent). What I am missing is: -what products people end up buying after clicking on the affiliate links. Does the Amazon affiliate program offer you any mechanism for linking a purchase with some data from your website? I noticed that I was able to add custom parameters and values to my affiliate links and the link checker was still happy with them; if Amazon reported which links initiated an order, then I would be able to cross-reference the orders using the custom parameters...

    Read the article

  • How can I create a VBO of a type determined at runtime?

    - by lapin
    I've written a Vbo template class to work with OpenGL. I'd like to set the type from a config file at runtime. For example: <vbo type="bump_vt" ... /> Vbo* pVbo = new Vbo(bump_vt, ...); Is there some way I can do this without a large if-else block such as: if( sType.compareTo("bump_vt") == 0 ) Vbo* pVbo = new Vbo(bump_vt, ...); else if ... I'm writing for multiple platforms in C++.
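    The question is C++, but the usual way to avoid a growing if-else block is the same in any language: a registry that maps the type string from the config file to a small factory function. Sketched below in C# purely to illustrate the shape of the pattern (all names are invented); in C++ the equivalent is typically a std::map from std::string to a std::function or function pointer that constructs the right Vbo specialization.

        using System;
        using System.Collections.Generic;

        interface IVbo { }
        class BumpVtVbo : IVbo { }      // stand-in for Vbo(bump_vt, ...)
        class PlainVbo : IVbo { }

        static class VboFactory
        {
            // One entry per type name that may appear in the config file.
            static readonly Dictionary<string, Func<IVbo>> registry =
                new Dictionary<string, Func<IVbo>>
                {
                    { "bump_vt", () => new BumpVtVbo() },
                    { "plain",   () => new PlainVbo() },
                };

            public static IVbo Create(string typeName)
            {
                Func<IVbo> make;
                if (registry.TryGetValue(typeName, out make))
                    return make();
                throw new ArgumentException("Unknown VBO type: " + typeName);
            }
        }

        // Usage: IVbo vbo = VboFactory.Create("bump_vt");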

    Read the article

  • Which is the appropriate content-type meta tag value?

    - by Argoron
    I have a question about the meta tag content-type. When starting to build my site (HTML+PHP+JS), I copied a lot of the meta tags over from elsewhere, and I have, amongst others, the following: <meta http-equiv="content-type" content="application/xhtml+xml; charset=iso-8859-1" /> Now, I've seen that this tag is often used with the value "text/html". I've been searching the web but could not find a comprehensive explanation of the difference between the two. The "text/html" value intuitively sounds more straightforward to me. Should I change my tag to that, or is "application/xhtml+xml" an equivalent solution? Alternatively, can anyone point me to a resource where the different values for this tag are listed and explained in a clear manner? Thanks in advance

    Read the article

  • Which MIME type to compress? and what If I omit the `type` attribute from the HTML?

    - by rockyraw
    Per my request, my webhost had turned mod_deflate ON. In my Cpanel I now have an "Optimize Website" button. Inside that menu I could either choose: "Compress all content" or "Compress the specified MIME types" with the following default MIME types: "text/html text/plain text/xml" Which option should I choose and why? If I choose option 2, which types should I add (is there a recommended list with the exact way they should be written)? According to Google recommendations, I have omitted the type="text/css" attributes from all CSS references, as well as the type="text/javascript" attributes from all script references. Would this hinder the "gzipping" process?

    Read the article

  • F# - When do you use a type class instead of a record when you do not want to use mutable fields?

    - by fairflow
    I'm imagining a situation where you are creating an F# module in a purely functional style. This means objects do not have mutable fields and are not modified in place. I'm assuming for simplicity that there is no need to use .NET objects or other kinds of objects. There are two possible ways of implementing an object-oriented kind of solution: the first is to use type classes and the second to use records which have fields of functional type, to implement methods. I imagine you'd use classes when you want to use inheritance but that otherwise records would be adequate, if perhaps clumsier to express. Or do you find classes more convenient than records in any case?

    Read the article

  • jqGrid: sort by index

    - by David__
    I am having trouble getting a column to sort by an index other than the 'name' value. In this case I am trying to sort the aggregation type column (values are days of the week) by the week order, rather than alphanumeric order. To do this I added an index column ('Aggregation type-index') that has the days of week an integers. However with this configuration, it fails to sort that column by index or name. Can someone point me the err in my ways? I posted all the js and css that is on the page, because I am also having two other issues, that if you notice the problem great, otherwise I'll keep hunting. I want to be able to enable the column reodering and be able to resize the grid (Both shown at http://trirand.com/blog/jqgrid/jqgrid.html under the new in 3.6 tab). Both options are not working either. <link rel="stylesheet" type="text/css" href="/static/latest_ui/themes/base/jquery.ui.all.css"/> <link rel="stylesheet" type="text/css" media="print" href="/static/css/print.css"/> <script src="/static/js/jquery-1.7.2.min.js" type="text/javascript"></script> <script src="/static/latest_ui/ui/jquery.ui.core.js"></script> <script src="/static/latest_ui/ui/jquery.ui.widget.js"></script> <script src="/static/latest_ui/ui/jquery.ui.position.js"></script> <script src="/static/latest_ui/ui/jquery.ui.button.js"></script> <script src="/static/latest_ui/ui/jquery.ui.menu.js"></script> <script src="/static/latest_ui/ui/jquery.ui.menubar.js"></script> <script src="/static/latest_ui/ui/jquery.ui.tabs.js"></script> <script src="/static/latest_ui/ui/jquery.ui.datepicker.js"></script> <script src="/static/js/custom.js"></script> <link rel="stylesheet" type="text/css" media="all" href="/static/css/custom_style.css" /> <link rel="stylesheet" type="text/css" media="all" href="/static/css/custom_colors.css" /> <link rel="stylesheet" type="text/css" media="screen" href="/static/css/ui.jqgrid.css" /> <body> <table id="grid_reports"></table> <div id='pager'></div> </body> <script src="/static/latest_ui/ui/jquery.ui.resizable.js"></script> <script src="/static/latest_ui/ui/jquery.ui.sortable.js"></script> <script src="/static/js/grid.locale-en.js" type="text/javascript"></script> <script src="/static/js/jquery.jqGrid.min.js" type="text/javascript"></script> <script src="/static/js/jqGrid_src/grid.jqueryui.js"></script> <script> $(function() { jQuery("#grid_reports").jqGrid({ sortable: true, datatype: "local", height: 500, width: 300, colNames:['Series', 'Agg Type', 'Days'], colModel:[{'index': 'By series', 'align': 'left', 'sorttype': 'text', 'name': 'By series', 'width': 65}, {'index': 'Aggregation type-index', 'align': 'left', 'sorttype': 'int', 'name': 'Aggregation type', 'width': 75}, {'index': 'Days since event', 'align': 'center', 'sorttype': 'text', 'name': 'Days since event', 'width': 50}], rowNum:50, pager: '#pager', sortname: 'Aggregation type', sortorder: 'desc', altRows: true, rowList:[20,50,100,500,10000], viewrecords: true, gridview: true, caption: "Report for 6/19/12" }); jQuery("#grid_reports").navGrid("#pager",{edit:false,add:false,del:false}); jQuery("#grid_reports").jqGrid('gridResize',{minWidth:60,maxWidth:2500,minHeight:80, maxHeight:2500}); var mydata = [{'Days since event': 132, 'Aggregation type': 'Date=Fri', 'By series': 'mean', 'Aggregation type-index': 5}, {'DIM at event': 178, 'Aggregation type': 'Date=Thu', 'By series': 'mean', 'Aggregation type-index': 4}, {'DIM at event': 172, 'Aggregation type': 'Date=Wed', 'By series': 'mean', 'Aggregation type-index': 3}, {'DIM at event': 146, 
'Aggregation type': 'Date=Tue', 'By series': 'mean', 'Aggregation type-index': 2}, {'DIM at event': 132, 'Aggregation type': 'Date=Sat', 'By series': 'mean', 'Aggregation type-index': 6}, {'DIM at event': 162, 'Aggregation type': 'Date=Mon', 'By series': 'mean', 'Aggregation type-index': 1}, {'DIM at event': 139, 'Aggregation type': 'Date=Sun', 'By series': 'mean', 'Aggregation type-index': 0}]; for(var i=0;i<=mydata.length;i++) jQuery("#grid_reports").jqGrid('addRowData',i+1,mydata[i]); }); </script>

    Read the article

  • drupal_get_form is not passing along node array

    - by ElectronicBlacksmith
    I have not been able to get drupal_get_form to pass on the node data. Code snippets are below. The drupal_get_form documentation (api.drupal.org) states that it will pass on the extra parameters. I believe the node data is not being passed because (apparently) $node['language'] is not defined in hook_form, which causes $form['qqq'] not to be created and thus the preview button shows up. My goal here is that the preview button shows up for the path "node/add/author" but does not show up for "milan/author/add". Any alternative methods for achieving this goal would be helpful, but the question I want answered is in the preceding paragraph. Everything I've read indicates that it should work. This menu item $items['milan/author/add'] = array( 'title' => 'Add Author', 'page callback' => 'get_author_form', 'access arguments' => array('access content'), 'file' => 'author.pages.inc', ); calls this code function get_author_form() { //return node_form(NULL,NULL); //return drupal_get_form('author_form'); return author_ajax_form('author'); } function author_ajax_form($type) { global $user; module_load_include('inc', 'node', 'node.pages'); $types = node_get_types(); $type = isset($type) ? str_replace('-', '_', $type) : NULL; // If a node type has been specified, validate its existence. if (isset($types[$type]) && node_access('create', $type)) { // Initialize settings: $node = array('uid' => $user->uid, 'name' => (isset($user->name) ? $user->name : ''), 'type' => $type, 'language' => 'bbb', 'bbb' => 'TRUE'); $output = drupal_get_form($type .'_node_form', $node); } return $output; } And here is the hook_form and hook_form_alter code function author_form_author_node_form_alter(&$form, &$form_state) { $form['author']=NULL; $form['taxonomy']=NULL; $form['options']=NULL; $form['menu']=NULL; $form['comment_settings']=NULL; $form['files']=NULL; $form['revision_information']=NULL; $form['attachments']=NULL; if($form["qqq"]) { $form['buttons']['preview']=NULL; } } function author_form(&$node) { return make_author_form(&$node); } function make_author_form(&$node) { global $user; $type = node_get_types('type', $node); $node = author_make_title($node); drupal_set_breadcrumb(array(l(t('Home'), NULL), l(t($node->title), 'node/' .
$node->nid))); $form['authorset'] = array( '#type' => 'fieldset', '#title' => t('Author'), '#weight' => -50, '#collapsible' => FALSE, '#collapsed' => FALSE, ); $form['author_id'] = array( '#access' => user_access('create pd_recluse entries'), '#type' => 'hidden', '#default_value' => $node->author_id, '#weight' => -20 ); $form['authorset']['last_name'] = array( '#type' => 'textfield', '#title' => t('Last Name'), '#maxlength' => 60, '#default_value' => $node->last_name ); $form['authorset']['first_name'] = array( '#type' => 'textfield', '#title' => t('First Name'), '#maxlength' => 60, '#default_value' => $node->first_name ); $form['authorset']['middle_name'] = array( '#type' => 'textfield', '#title' => t('Middle Name'), '#maxlength' => 60, '#default_value' => $node->middle_name ); $form['authorset']['suffix_name'] = array( '#type' => 'textfield', '#title' => t('Suffix Name'), '#maxlength' => 14, '#default_value' => $node->suffix_name ); $form['authorset']['body_filter']['body'] = array( '#access' => user_access('create pd_recluse entries'), '#type' => 'textarea', '#title' => 'Describe Author', '#default_value' => $node->body, '#required' => FALSE, '#weight' => -19 ); $form['status'] = array( '#type' => 'hidden', '#default_value' => '1' ); $form['promote'] = array( '#type' => 'hidden', '#default_value' => '1' ); $form['name'] = array( '#type' => 'hidden', '#default_value' => $user->name ); $form['format'] = array( '#type' => 'hidden', '#default_value' => '1' ); // NOTE in node_example there is some additional code here not needed for this simple node-type $thepath='milan/author'; if($_REQUEST["theletter"]) { $thepath .= "/" . $_REQUEST["theletter"]; } if($node['language']) { $thepath='milan/authorajaxclose'; $form['qqq'] = array( '#type' => 'hidden', '#default_value' => '1' ); } $form['#redirect'] = $thepath; return $form; } That menu path coincides with this theme (PHPTemplate)

    Read the article

  • Get specifc value from JSon string using JSon.Net

    - by dean nolan
    I am trying to get a value from a JSon formatted string. It was to get album info from a website called Freebase. My result is like this: { "a0": { "code": "/api/status/error", "messages": [ { "code": "/api/status/error/mql/result", "info": { "count": 20, "result": [ { "album": [ { "name": "Definitely Maybe", "release_date": "1994-08-30" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Most Wanted Rock 1", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Alternative 90s", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live Forever: Best of Britpop", "release_date": "2003-03-03" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best... In the World... Ever! Volume 5", "release_date": "1997-03-31" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live 4 Ever", "release_date": "1998-06-29" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "De Afrekening, Volume 8", "release_date": "1994" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 33", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best Anthems... Ever! Volume 2", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "1995 Mercury Music Prize: Ten Albums of the Year", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 1994", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Indie Top 20, Volume 21", "release_date": "1995" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Dad Rocks!", "release_date": "2006-06-05" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Untitled", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Greatest Hits of 1994", "release_date": "1994-10" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Top of the Pops 2", "release_date": "2000-03-27" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems (disc 1)", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Jamie Oliver's Cookin'", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Killer Buzz", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" } ] }, "message": "Unique query may have at most one result. 
Got 20", "path": "", "query": { "album": [ { "name": null, "release_date": null, "sort": "release_date" } ], "artist": "Oasis", "error_inside": ".", "name": "Live Forever", "type": "/music/track" } } ] }, "code": "/api/status/ok", "status": "200 OK", "transaction_id": "cache;cache04.p01.sjc1:8101;2010-03-30T18:04:20Z;0035" } I am looking to get the first album title, Definitely Maybe, from this list. I have tried parsing the string like this: JObject o = JObject.Parse(jsonString); string album = (string)o[""]; But no matter what I have tried I don't know what to put in those quotes. How would I get this specific value or be able to search for it? Thanks
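    Based on the structure quoted above, the first album name sits at a0 -> messages[0] -> info -> result[0] -> album[0] -> name. A small sketch of two ways to reach it with Json.NET (the literal below is a trimmed-down stand-in for the full Freebase response):

        using System;
        using Newtonsoft.Json.Linq;

        class FreebaseDemo
        {
            static void Main()
            {
                // Trimmed-down stand-in for the response shown in the question.
                string jsonString = @"{ ""a0"": { ""messages"": [ { ""info"": { ""result"": [
                    { ""album"": [ { ""name"": ""Definitely Maybe"", ""release_date"": ""1994-08-30"" } ],
                      ""artist"": ""Oasis"", ""name"": ""Live Forever"" } ] } } ] } }";

                JObject o = JObject.Parse(jsonString);

                // Index step by step down the path...
                string album = (string)o["a0"]["messages"][0]["info"]["result"][0]["album"][0]["name"];
                Console.WriteLine(album);   // Definitely Maybe

                // ...or use a single path expression.
                string viaPath = (string)o.SelectToken("a0.messages[0].info.result[0].album[0].name");
                Console.WriteLine(viaPath); // Definitely Maybe
            }
        }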

    Read the article

  • Why is there an implicit conversion from Float/Double to BigDecimal, but not from String?

    - by soc
    Although the situation of conversion from Doubles to BigDecimals has improved a bit compared to Java scala> new java.math.BigDecimal(0.2) res0: java.math.BigDecimal = 0.20000000000000001110223024625156... scala> BigDecimal(0.2) res1: scala.math.BigDecimal = 0.2 and things like val numbers: List[BigDecimal] = List(1.2, 3.2, 0.7, 0.8, 1.1) work really well, wouldn't it be reasonable to have an implicit conversion like implicit def String2BigDecimal(s: String) = BigDecimal(s) available by default which can convert Strings to BigDecimals like this? val numbers: List[BigDecimal] = List("1.2", "3.2", "0.7", "0.8", "1.1") Or am I missing something and Scala resolved all "problems" of Java with using the BigDecimal constructor with a floating point value instead of a String, and BigDecimal(String) is basically not needed anymore in Scala?

    Read the article

  • Generic structure for performing string conversion when data binding.

    - by Rohan West
    Hi there, a little while ago I was reading an article about a series of classes that were created to handle the conversion of strings into a generic type. Below is a mock class structure. Basically, if you set the StringValue it will perform some conversion into type T: public class MyClass<T> { public string StringValue {get;set;} public T Value {get;set;} } I cannot remember the article I was reading, or the name of the class it described. Is this already implemented in the framework? Or shall I create my own?
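    The framework already ships the general-purpose pieces for this: Convert.ChangeType and TypeDescriptor.GetConverter handle string-to-T for most built-in and TypeConverter-decorated types, so a wrapper along the lines of the mock class stays small. A hedged sketch (property names follow the mock above; the conversion details are my assumption, not the article's):

        using System;
        using System.ComponentModel;
        using System.Globalization;

        public class MyClass<T>
        {
            private string stringValue;

            public T Value { get; private set; }

            public string StringValue
            {
                get { return stringValue; }
                set
                {
                    stringValue = value;
                    // TypeDescriptor converters cover int, double, DateTime, Guid, enums, ...
                    TypeConverter converter = TypeDescriptor.GetConverter(typeof(T));
                    Value = (T)converter.ConvertFromString(null, CultureInfo.InvariantCulture, value);
                }
            }
        }

        // Usage: var c = new MyClass<int> { StringValue = "42" };   // c.Value == 42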

    Read the article

  • MultipleHiddenInput doesn't encode properly over POST?

    - by andrew
    The form looks very simple: class MyForm(forms.Form): ids = forms.MultipleChoiceField(widget=forms.MultipleHiddenInput()) def view(request): ... form = MyForm(initial={'ids': [o.id for o in queryset]}) Which gives me the HTML (which looks good enough): <form method="post" action="/foo/bar/"> <input type="hidden" name="ids" value="7720889" id="id_ids_0"> <input type="hidden" name="ids" value="7717962" id="id_ids_1"> <input type="hidden" name="ids" value="7717807" id="id_ids_2"> <input type="hidden" name="ids" value="7713584" id="id_ids_3"> <input type="hidden" name="ids" value="7712277" id="id_ids_4"> <input type="hidden" name="ids" value="7707475" id="id_ids_5"> <input type="hidden" name="ids" value="7707257" id="id_ids_6"> <input type="hidden" name="ids" value="7705271" id="id_ids_7"> <input type="hidden" name="ids" value="7704338" id="id_ids_8"> <input type="hidden" name="ids" value="7704137" id="id_ids_9"> <input type="hidden" name="ids" value="7695444" id="id_ids_10"> <input type="hidden" name="ids" value="7695242" id="id_ids_11"> <input type="hidden" name="ids" value="7690683" id="id_ids_12"> <input type="hidden" name="ids" value="7690431" id="id_ids_13"> <input type="hidden" name="ids" value="7689035" id="id_ids_14"> <input type="hidden" name="ids" value="7681230" id="id_ids_15"> <input type="hidden" name="ids" value="7679189" id="id_ids_16"> <input type="hidden" name="ids" value="7675315" id="id_ids_17"> <input type="hidden" name="ids" value="7667291" id="id_ids_18"> <input type="hidden" name="ids" value="7661162" id="id_ids_19"> <button type="submit">Test</button> </form> But, in the POST that comes in, I'm only getting one value: <QueryDict: {u'ids': [u'7661162']}> What gives? What am I doing wrong?

    Read the article

  • Why is sql server giving a conversion error when submitting date.today to a datetime column?

    - by kpierce8
    I am getting a conversion error every time I try to submit a date value to sql server. The column in sql server is a datetime and in vb I'm using Date.today to pass to my parameterized query. I keep getting a sql exception Conversion failed when converting datetime from character string. Here's the code Public Sub ResetOrder(ByVal connectionString As String) Dim strSQL As String Dim cn As New SqlConnection(connectionString) cn.Open() strSQL = "DELETE Tasks WHERE ProjID = @ProjectID" Dim cmd As New SqlCommand(strSQL, cn) cmd.Parameters.AddWithValue("ProjectID", 5) cmd.ExecuteNonQuery() strSQL = "INSERT INTO Tasks (ProjID, DueDate, TaskName) VALUES " & _ " (@ProjID, @TaskName, @DueDate)" Dim cmd2 As New SqlCommand(strSQL, cn) cmd2.CommandText = strSQL cmd2.Parameters.AddWithValue("ProjID", 5) cmd2.Parameters.AddWithValue("DueDate", Date.Today) cmd2.Parameters.AddWithValue("TaskName", "bob") cmd2.ExecuteNonQuery() cn.Close() DataGridView1.DataSource = ds.Projects DataGridView2.DataSource = ds.Tasks End Sub Any thoughts would be greatly appreciated.
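    One detail worth noting in the snippet: the column list is (ProjID, DueDate, TaskName) but the placeholders are (@ProjID, @TaskName, @DueDate), so the DueDate column is actually being handed the string "bob" - which on its own would produce exactly this conversion error. For reference, a minimal C# version (table and column names assumed from the question) where the order lines up and the date is passed as a typed parameter:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class InsertTask
        {
            static void Insert(string connectionString)
            {
                const string sql =
                    "INSERT INTO Tasks (ProjID, DueDate, TaskName) VALUES (@ProjID, @DueDate, @TaskName)";

                using (var cn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, cn))
                {
                    cmd.Parameters.AddWithValue("@ProjID", 5);
                    // Pass a DateTime value so ADO.NET sends a real datetime, not a string.
                    cmd.Parameters.Add("@DueDate", SqlDbType.DateTime).Value = DateTime.Today;
                    cmd.Parameters.AddWithValue("@TaskName", "bob");

                    cn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }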

    Read the article

  • onchange event is not woking

    - by Jim
    Hello guys, Whats wrong with this code? My first function working fine but second function is not working. Please help me. Thanks function bag1() { var sum = 0; var qty = document.form1.qty.value; var pack = document.form1.pack.value; document.form1.bag.value = (qty*pack)/100; } function bag2() { var sum = 0; var qty = document.form1.qty.value; var pack1 = document.form1.pack1.value; document.form1.bag1.value = (qty*pack1)/100; } Here is HTML code <input id="item1" name="item1" type="text" maxlength="255" value=""/> <input type="text" maxlength="255" value=""/> <input id="weight" name="weight" type="text" maxlength="255" value=""/> <input id="pack" name="pack" type="text" maxlength="255" onchange="bag1();" value=""/> <input id="bag" name="bag" type="text" maxlength="255" value=""/> <input id="rate" name="rate" type="text" maxlength="255" value=""/> <input id="amount" name="amount" type="text" maxlength="255" value=""/> <input id="element_12" name="type" type="text" maxlength="255" value=""/> <input id="element_13" name="brand" type="text" maxlength="255" value=""/> <!-------------------------- Caculating 2th Row ---------------------------------------------------------------> <input type="text" maxlength="255" value=""/> <input type="text" maxlength="255" value=""/> <input id="weight2" name="weight2" type="text" maxlength="255" value=""/> <input id="pack2" name="pack2" type="text" maxlength="255" value=""/> <input id="bag2" name="bag2" type="text" maxlength="255" onchange="bag2();" value=""/> <input id="rate2" name="rate2" type="text" maxlength="255" value=""/> <input id="amount2" name="amount2" type="text" maxlength="255" value=""/> <input id="element_12" name="type" type="text" maxlength="255" value=""/> <input id="element_13" name="brand" type="text" maxlength="255" value=""/> </form type="submit" value="calculator2">

    Read the article

  • How change Castor mapping to remove "xmlns:xsi" and "xsi:type" attributes from element in XML output

    - by Derek Mahar
    How do I change the Castor mapping <?xml version="1.0"?> <!DOCTYPE mapping PUBLIC "-//EXOLAB/Castor Mapping DTD Version 1.0//EN" "http://castor.org/mapping.dtd"> <mapping> <class name="java.util.ArrayList" auto-complete="true"> <map-to xml="ArrayList" /> </class> <class name="com.db.spgit.abstrack.ws.response.UserResponse"> <map-to xml="UserResponse" /> <field name="id" type="java.lang.String"> <bind-xml name="id" node="element" /> </field> <field name="deleted" type="boolean"> <bind-xml name="deleted" node="element" /> </field> <field name="name" type="java.lang.String"> <bind-xml name="name" node="element" /> </field> <field name="typeId" type="java.lang.Integer"> <bind-xml name="typeId" node="element" /> </field> <field name="regionId" type="java.lang.Integer"> <bind-xml name="regionId" node="element" /> </field> <field name="regionName" type="java.lang.String"> <bind-xml name="regionName" node="element" /> </field> </class> </mapping> to suppress the xmlns:xsi and xsi:type attributes in the element of the XML output? For example, instead of the output XML <?xml version="1.0" encoding="UTF-8"?> <ArrayList> <UserResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="UserResponse"> <name>Tester</name> <typeId>1</typeId> <regionId>2</regionId> <regionName>US</regionName> </UserResponse> </ArrayList> I'd prefer <?xml version="1.0" encoding="UTF-8"?> <ArrayList> <UserResponse> <name>Tester</name> <typeId>1</typeId> <regionId>2</regionId> <regionName>US</regionName> </UserResponse> </ArrayList> such that the element name implies the xsi:type.

    Read the article

  • What is the ISO C++ way to directly define a conversion function to reference to array?

    - by ben
    According to the standard, a conversion function has a function-id operator conversion-type-id, which would look like, say, operator char(&)[4] I believe. But I cannot figure out where to put the function parameter list. gcc does not accept either of operator char(&())[4] or operator char(&)[4]() or anything I can think of. Now, gcc seems to accept (&operator char ())[4] but clang does not, and I am inclined to not either, since it does not seem to fit the grammar as I understand it. I do not want to use a typedef because I want to avoid polluting the namespace with it.

    Read the article

  • How would you measure conversion for an iPhone App Download?

    - by Eran Kampf
    I want to test how users convert from my web page to downloading my iPhone app. Right now the best funnel I have figured out is to go through a page that pings Google Analytics and then redirects to an iTunes link. But this funnel only measures conversion for users who got redirected to iTunes, not whether they actually downloaded the app once they saw it in iTunes. Does anyone know of a way to measure conversion to the actual download? (I know this is not a coding question, but it is still a problem app developers would encounter trying to market their app)

    Read the article

  • How to identify each parameter type in a C# method?

    - by user465876
    I have a C# method, say: MyMethod(int num, string name, Color color, MyComplexType complex) Using reflection, how can I distinctly identify each of the parameter types of any method? I want to perform some task by parameter type. If the type is a simple int, string or boolean then I do something; if it is Color, XMLDocument, etc. I do something else; and if it is a user-defined type like MyComplexType or MyCalci etc. then I want to do a certain task. I am able to retrieve all the parameters of a method using ParameterInfo and can loop through each parameter and get their types. But how can I identify each data type? foreach (var parameter in parameters) { //identify primitive types?? //identify value types //identify reference types } Edit: this is a part of my code to create a property-grid sort of page where I want to show the parameter list with data types for the selected method. If the parameter has a user-defined type/reference type then I want to expand it further to show all the elements under it with data types.
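    A sketch of how that loop might tell the cases apart using standard reflection properties (the grouping and the "user-defined" heuristic are illustrative choices, not the only way to do it):

        using System;
        using System.Reflection;

        static class ParameterInspector
        {
            public static void Inspect(MethodInfo method)
            {
                foreach (ParameterInfo parameter in method.GetParameters())
                {
                    Type t = parameter.ParameterType;

                    if (t.IsPrimitive || t == typeof(string) || t == typeof(decimal))
                    {
                        // int, bool, double, string, ... : simple editable values
                        Console.WriteLine(parameter.Name + ": simple type " + t.Name);
                    }
                    else if (t.IsValueType)
                    {
                        // structs and enums such as Color or DateTime
                        Console.WriteLine(parameter.Name + ": value type " + t.Name);
                    }
                    else if (t.Assembly == method.DeclaringType.Assembly)
                    {
                        // reference types defined in the same assembly, e.g. MyComplexType:
                        // expand their public properties for the property-grid page
                        Console.WriteLine(parameter.Name + ": user-defined type " + t.FullName);
                    }
                    else
                    {
                        // remaining framework/library reference types (XmlDocument, ...)
                        Console.WriteLine(parameter.Name + ": framework type " + t.FullName);
                    }
                }
            }
        }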

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types  are treated special when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo which is about as simple of a dynamic class that I can think of:public class DynamicFoo : DynamicObject { Dictionary<string, object> properties = new Dictionary<string, object>(); public string Bar { get; set; } public DateTime Entered { get; set; } public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; if (!properties.ContainsKey(binder.Name)) return false; result = properties[binder.Name]; return true; } public override bool TrySetMember(SetMemberBinder binder, object value) { properties[binder.Name] = value; return true; } } This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember() which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type. Strong Typing and Dynamic Casting I now can instantiate and use DynamicFoo in a couple of different ways: Strong TypingDynamicFoo fooExplicit = new DynamicFoo(); var fooVar = new DynamicFoo(); These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time and so the type of fooExplicit is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above as it's never used. Using either of these I can access the native properties:DynamicFoo fooExplicit = new DynamicFoo();// static typing assignmentsfooVar.Bar = "Barred!"; fooExplicit.Entered = DateTime.Now; // echo back static values Console.WriteLine(fooVar.Bar); Console.WriteLine(fooExplicit.Entered); but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic. Dynamicdynamic fooDynamic = new DynamicFoo(); fooDynamic on the other hand is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:DynamicFoo fooExplicit = new DynamicFoo(); dynamic fooDynamic = fooExplicit; Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast so there's no need to use as dynamic. Dynamic functionality works at runtime and allows for the dynamic wrapper to look up and call members dynamically. 
A dynamic type will look for members to access or call in two places: the strongly typed members of the object, and the IDynamicMetaObjectProvider interface methods. So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows a runtime determination of what action to take when a member is accessed *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type: dynamic fooDynamic = new DynamicFoo(); // dynamic typing assignments fooDynamic.NewProperty = "Something new!"; fooDynamic.LastAccess = DateTime.Now; // dynamic assigning static properties fooDynamic.Bar = "dynamic barred"; fooDynamic.Entered = DateTime.Now; // echo back dynamic values Console.WriteLine(fooDynamic.NewProperty); Console.WriteLine(fooDynamic.LastAccess); Console.WriteLine(fooDynamic.Bar); Console.WriteLine(fooDynamic.Entered); The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see it's pretty easy to create an extensible type this way that can add members dynamically at runtime. The Alter Ego of IDynamicObject: The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong typing, compile-time features. You can see this easily in the Visual Studio code editor. As soon as you assign a value to a dynamic you lose Intellisense, which means there's no Intellisense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type: dynamic fooDynamic = new DynamicFoo(); fooDynamic.NewProperty = "New Property Value"; DynamicFoo foo = fooDynamic; foo.Bar = "Barred"; Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify as dynamic or as DynamicFoo. Moral of the Story: This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that take non-member data stores and expose them as an object interface.
You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance - to access the well-known strongly typed properties - and dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores through an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data can not assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor  Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.  
In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationaIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs. 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article
