Search Results

Search found 29956 results on 1199 pages for 'query builder methods'.

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string as part of the relative URL. For example, consider the following URI: http://localhost:7001/SimpleREST/Products/id=1234 Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle: http://localhost:7001/SimpleREST/Products?id=1234 At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example: http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the RESTful Products service using query strings for passing parameter information.
Let's begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Let's start with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus: you implement different flows for each of the HTTP verbs that you want your service to support.
Let's take a look at how the GET verb is implemented. This is the path that is taken if you were to point your browser to: http://localhost:7001/SimpleREST/Products/id=1234 There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter: $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value The Assign action then stores the value into an OSB variable named id. Using this type of XPath statement you can query for any of the parameters by name, without regard to their order in the parameter list. The Log statement is there simply to provide some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our REST service. Most of the response data is static, but the ID field that is returned is set based upon the query parameter that was passed into the REST proxy.
Testing the REST service with a browser is very simple: just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Let's see how to use the Test Console to test this GET service. Open the OSB web console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Below the Request Document section of the Test Console is the Transport section. Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the tags already defined. You just need to add a tag for each parameter you want to pass into the service. For our purposes with this particular call, you'd set the query-parameters field as follows: <tp:query-parameters> <tp:parameter name="id" value="1234" /> </tp:query-parameters> Now you are ready to push the Execute button to see the results of the call.
That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a RESTful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a second proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This helps to demonstrate OSB's ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, which calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOAP request and stores it in a local OSB variable. Nothing surprising here.
The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a Route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $outbound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second Insert action simply inserts the query-parameters XML node into the request. This element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of Oracle Service Bus 11gR1PS3. Download the sample source code here: rest2_sbconfig.jar
Ubuntu and the OSB Test Console: You will get an error when you try to use the Test Console with the Oracle Service Bus on Ubuntu (and likely a number of other Linux distros as well). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administration console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. Then select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tab and the General sub-tab in the main window. Look for the Listen Address field. By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this, so enter a value like localhost or the specific IP address or DNS name for your server (usually it's just localhost in development environments). Save your changes and restart the server. Your Test Console will now work correctly.
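For reference, the same XPath pattern extends directly to the multi-parameter URI shown at the top of this entry (?cat=electronics&subcat=television&mfg=sony). A minimal sketch of the Assign expressions, assuming each value is stored in its own OSB variable (the variable names cat, subcat and mfg are illustrative and not taken from the sample project):

    (: Assign to variable "cat" :)
    $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="cat"]/@value

    (: Assign to variable "subcat" :)
    $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="subcat"]/@value

    (: Assign to variable "mfg" :)
    $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="mfg"]/@value

Because each lookup is by @name, the order of the parameters in the query string does not matter.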

    Read the article

  • Grandparent – Parent – Child Reports in SQL Developer

    - by thatjeffsmith
    You’ll never see one of these family stickers on my car, but I promise not to judge…much. Parent – Child reports are pretty straightforward in Oracle SQL Developer. You have a ‘parent’ report, and then one or more ‘child’ reports which are based off of a value in a selected row or value from the parent. If you need a quick tutorial to get up to speed on the subject, go ahead and take 5 minutes. Shortly before I left for vacation 2 weeks ago, I got an interesting question from one of my Twitter followers: @thatjeffsmith any luck with the #Oracle awr reports in #SQLDeveloper? This is easy with multi generation parent>child Done in #dbvisualizer — Ronald Rood (@Ik_zelf) August 26, 2012 Now that I’m back from vacation, I can tell Ronald and everyone else that the answer is ‘Yes!’ And here’s how. Time to Get Out Your XML Editor Don’t have one? That’s OK, SQL Developer can edit XML files. While the Reporting interface doesn’t surface the ability to create multi-generational reports, the underlying code definitely supports it. We just need to hack away at the XML that powers a report. For this example I’m going to start simple: a query that brings back DEPARTMENTs, then EMPLOYEES, then JOBs. We can build the first two parts of the report using the report editor. A Parent-Child report in Oracle SQL Developer (Departments – Employees) Save the Report to XML Once you’ve generated the XML file, open it with your favorite XML editor. For this example I’ll be using the built-in XML editor in SQL Developer. SQL Developer Reports in their raw XML glory! Right after the PDF element in the XML document, we can start a new ‘child’ report by inserting a DISPLAY element. I just copied and pasted the existing ‘display’ down so I wouldn’t have to worry about screwing anything up. Note I also needed to change the ‘master’ name so it wouldn’t confuse SQL Developer when I try to import/open a report that has the same name. Also, I needed to update the binds tags to reflect the names from the child versus the original parent report. This is pretty easy to figure out on your own actually – I mean I’m no real developer and I got it pretty quick.
<?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="92857fce-0139-1000-8006-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[Grandparent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.departments]]></sql> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Parent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.employees where department_id = EPARTMENT_ID]]></sql> <binds> <bind id="DEPARTMENT_ID"> <prompt><![CDATA[DEPARTMENT_ID]]></prompt> <tooltip><![CDATA[DEPARTMENT_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Child]]></name> 
<description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.jobs where job_id = :JOB_ID]]></sql> <binds> <bind id="JOB_ID"> <prompt><![CDATA[JOB_ID]]></prompt> <tooltip><![CDATA[JOB_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> </display> </display> </display> </displays> Save the file and ‘Open Report…’ You’ll see your new report name in the tree. You just need to double-click it to open it. Here’s what it looks like running A 3 generation family Now Let’s Build an AWR Text Report Ronald wanted to have the ability to query AWR snapshots and generate the AWR reports. That requires a few inputs, including a START and STOP snapshot ID. That basically tells AWR what time period to use for generating the report. And here’s where it gets tricky. We’ll need to use aliases for the SNAP_ID column. Since we’re using the same column name from 2 different queries, we need to use different bind variables. Fortunately for us, SQL Developer’s clever enough to use the column alias as the BIND. Here’s what I mean: Grandparent Query SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc Parent Query SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc And here’s where it gets even trickier – you can’t reference a bind from outside the parent query. My grandchild report can’t reference a value from the grandparent report. So I just carry the selected value down to the parent. In my parent query SELECT you see the ‘:START1′ at the end? That’s making that value available to me when I use it in my grandchild query. To complicate things a bit further, I can’t have a column name with a ‘:’ in it, or SQL Developer will get confused when I try to reference the value of the variable with the ‘:’ – and ‘::Name’ doesn’t work. But that’s OK, just alias it. Grandchild Query Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1)); Ok, and the last trick – I hard-coded my report to use my database’s DB_ID and INST_ID into the AWR package call. 
Now a smart person could figure out a way to make that work on any database, but I got lazy and ran out of time. But this should be far enough for you to take it from here. Here’s what my report looks like now: Caution: don’t run this if you haven’t licensed Enterprise Edition with Diagnostic Pack. The Raw XML for this AWR Report <?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="927ba96c-0139-1000-8001-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[AWR Start Stop Report Final]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Stop SNAP_ID]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[AWR Report]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1 ))]]></sql> </query> </display> </display> </display> </displays> Should We Build Support for Multiple Levels of Reports into the User Interface? Let us know! A comment here or a suggestion on our SQL Developer Exchange might help your case!
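If you want to get rid of the hard-coded DB_ID and instance number, one possible variation (a sketch that simply reuses the carry-it-down trick from this post, not something from the original article, and untested here) is to select DBID and INSTANCE_NUMBER from DBA_HIST_SNAPSHOT in the grandparent query, alias them, and pass them down through binds exactly like :START1. The same Diagnostic Pack licensing caution applies.

    -- Grandparent: also expose dbid and instance_number so they can be carried down
    SELECT snap_id start1, dbid dbid1, instance_number inst1,
           begin_interval_time, end_interval_time
      FROM dba_hist_snapshot
     ORDER BY 1 asc

    -- Parent: carry everything the grandchild will need
    SELECT snap_id stop1, :START1 carry, :DBID1 dbid2, :INST1 inst2,
           begin_interval_time, end_interval_time
      FROM dba_hist_snapshot
     WHERE snap_id > :START1
     ORDER BY 1 asc

    -- Grandchild: no hard-coded values left
    Select Output
      From Table(Dbms_Workload_Repository.Awr_Report_Text(:DBID2, :INST2, :CARRY, :STOP1))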

    Read the article

  • What would be the Query to get exact same result like Windows 7 start menu search for Programs using Windows Search service?

    - by Somnath
    I would like to implement the same search application as Windows 7's, using microsoft.search.interop.dll and C#. Currently I'm using the System.Kind property to retrieve information about programs from Windows Search, but the result set does not look the same as the Windows 7 search; the order of the items is different. SELECT TOP 3 System.ItemNameDisplay, System.DateAccessed FROM SystemIndex WHERE System.ItemNameDisplay LIKE 'ad%' AND (System.Kind='Program') What would be the query to get exactly the same results as the Windows 7 start menu search for Programs? As an example: search token = 'ad' Windows 7 search result: Adobe Reader 9, Add a device, Adobe Photoshop 7.0 Search result from my code: Adobe ImageReady 7.0, Adobe Photoshop 7.0, Adobe Reader 9
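For illustration, here is a minimal sketch (not from the original post) of running the same SystemIndex query from C# through the Windows Search OLE DB provider instead of microsoft.search.interop.dll. The ORDER BY System.Search.Rank clause is an assumption about getting closer to relevance ordering; the Start menu applies its own ranking on top of the index, so the order may still differ.

    using System;
    using System.Data.OleDb;

    class ProgramSearch
    {
        static void Main()
        {
            // Windows Search exposes the SystemIndex catalog through this OLE DB provider.
            const string connectionString =
                "Provider=Search.CollatorDSO;Extended Properties='Application=Windows';";

            // Same query as in the question, ordered by the index's relevance rank.
            const string sql =
                "SELECT TOP 3 System.ItemNameDisplay, System.DateAccessed " +
                "FROM SystemIndex " +
                "WHERE System.ItemNameDisplay LIKE 'ad%' AND System.Kind = 'Program' " +
                "ORDER BY System.Search.Rank DESC";

            using (var connection = new OleDbConnection(connectionString))
            using (var command = new OleDbCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                    }
                }
            }
        }
    }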

    Read the article

  • How to access / query Team Foundation Server 2012 with Odata?

    - by cseder
    I've tried to find a solution for this for hours now, and I'm getting the same results in the end: being asked to install a lot of Azure and other stuff, plus running some example project .sln that I can't open with my 2012 version of Visual Studio. So I'm pretty much stuck, and have some pretty straightforward questions regarding this: Does TFS 2012 include the OData service in any way, so that I don't have to install it? If not, how can I install a NATIVE 2012 version of the OData service for TFS 2012? Is it possible that I'm aiming for the wrong target here? I'm looking for a solution to the following: I have a TFS 2012 server that I need to be able to create Work Items on programmatically, based on data from our Help Desk system. Then I need to query these Work Items for status changes since their creation, and update the Help Desk database. Am I better off using the "regular" TFS API? I was kinda thinking that the OData way was more "future proof", but I'm not sure...
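On the last question: if the "regular" TFS API turns out to be the simpler route, a rough sketch of that approach looks like the following (assuming the TFS 2012 client object model assemblies Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client are referenced; the collection URL, project name "HelpDesk", work item type and field values are placeholders).

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class HelpDeskSync
    {
        static void Main()
        {
            // Placeholder collection URL and project name.
            var tpc = new TfsTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var store = tpc.GetService<WorkItemStore>();

            // Create a work item from a help desk ticket.
            Project project = store.Projects["HelpDesk"];
            WorkItemType taskType = project.WorkItemTypes["Task"];
            var workItem = new WorkItem(taskType)
            {
                Title = "Ticket 4711: example issue from the help desk"
            };
            workItem.Fields["System.Description"].Value = "Created from the help desk database.";
            workItem.Save();

            // Later: look for work items that changed recently and push status back.
            WorkItemCollection changed = store.Query(
                "SELECT [System.Id], [System.Title], [System.State] " +
                "FROM WorkItems " +
                "WHERE [System.TeamProject] = 'HelpDesk' " +
                "AND [System.ChangedDate] >= @Today - 7");

            foreach (WorkItem item in changed)
            {
                Console.WriteLine("{0}: {1} ({2})", item.Id, item.Title, item.State);
            }
        }
    }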

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi, What I've tried: I have two email storage architectures, old and new.
Old: courier-imapds on several (18+) 1TB-storage servers. If one of them shows signs of running out of disk space, we migrate a few email accounts to another server. The servers don't have replicas, and no backups either.
New: dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs. We store fresh mails on the SSDs and run a doveadm purge to move mails older than a day to the SATA disks. There is an identical server which has a max-15-minute-old rsync backup from the primary server. Higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server. The rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO. Scaling out was expected to be done by provisioning another pair of such huge servers. On facing disk-crunch issues like in the old architecture, email accounts would be moved manually.
Concerns/doubts: I'm not convinced that the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.
Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"? Does one have to re-write (parts of) these to work with an object storage of some sort, such as OpenStack object storage?

    Read the article

  • sql server procedure error

    - by Mohan
    CREATE PROCEDURE USP_SEARCH_HOTELS ( @Text varchar(50), @Type varchar(40) ) AS BEGIN Declare @Query VARCHAR(60) IF @Type = 'By Country' BEGIN SET @Query = 'Hotel.countryName like '+ @Text+'%' END ELSE IF @Type = 'By State' BEGIN SET @Query = 'HOTEL.stateName like '+ @Text+'%' END ELSE IF @Type='By Property Name' BEGIN SET @Query='hotel.propertyname like'+ @Text+'%' End ELSE IF @Type='By Rating' BEGIN SET @Query='hotel.starRating='+ Cast(@Text as INT) END ELSE IF @Type='By City' BEGIN SET @Query='hotel.cityName like '+ @Text+'%' END begin select * from hotel,tbl_cust_info where hotel.agentID=Tbl_Cust_Info.Cust_ID and (@Query) end END WHAT IS THE ERROR IN THIS PROCEDURE PLEASE HELP.
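For comparison only: T-SQL cannot evaluate a string variable as a condition inside a WHERE clause (the "and (@Query)" part above), so this kind of dynamic filter is usually built as a complete statement and run with sp_executesql, keeping the search text as a parameter. The following is just a sketch of that pattern, not a drop-in rewrite of the posted procedure; table and column names are taken from the post.

    CREATE PROCEDURE USP_SEARCH_HOTELS_SKETCH
    (
        @Text VARCHAR(50),
        @Type VARCHAR(40)
    )
    AS
    BEGIN
        DECLARE @Sql   NVARCHAR(MAX),
                @Where NVARCHAR(200);

        -- Pick the column to filter on; the search text itself stays a parameter.
        SET @Where = CASE @Type
                        WHEN 'By Country'       THEN N'hotel.countryName LIKE @p + ''%'''
                        WHEN 'By State'         THEN N'hotel.stateName LIKE @p + ''%'''
                        WHEN 'By Property Name' THEN N'hotel.propertyname LIKE @p + ''%'''
                        WHEN 'By Rating'        THEN N'hotel.starRating = CAST(@p AS INT)'
                        WHEN 'By City'          THEN N'hotel.cityName LIKE @p + ''%'''
                     END;  -- an unrecognised @Type leaves @Where NULL and is not handled here

        SET @Sql = N'SELECT * FROM hotel
                     JOIN tbl_cust_info ON hotel.agentID = tbl_cust_info.Cust_ID
                     WHERE ' + @Where;

        EXEC sp_executesql @Sql, N'@p VARCHAR(50)', @p = @Text;
    END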

    Read the article

  • Killing MySQL processes staying in sleep command.

    - by Shino88
    Hey, I am connecting to a MySQL database through Hibernate and I seem to have processes that are not being killed after they are finished in the session. I have called flush and close on each session, but when I check the server the last processes are still there with a sleep command. This is a new problem which I am having and was not the case yesterday. Is there any way I can ensure the killing of these processes when I am done with a session? Below is an example of one of my classes. public JSONObject check() { //creates a new session needed to add elements to a database Session session = null; //holds the result of the check in the database JSONObject check = new JSONObject(); try{ //creates a new session needed to add elements to a database SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory(); session = sessionFactory.openSession(); if (justusername){ //query created to select a username from user table String hquery = "Select username from User user Where username = ? "; //query created Query query = session.createQuery(hquery); //sets the username of the query the values JSONObject contents query.setString(0, username); // executes query and adds username string variable String user = (String) query.uniqueResult(); //checks to see if result is found (null if not found) if (user == null) { //adds false to Jobject if not found check.put("indatabase", "false"); } else { check.put("indatabase", "true"); } //adds check to Jobject to say just to check username check.put("justusername", true); } else { //query created to select a username and password from user table String hquery = "Select username from User user Where username = :user and password = :pass "; Query query = session.createQuery(hquery); query.setString("user", username); query.setString("pass", password); String user = (String) query.uniqueResult(); if(user ==null) { check.put("indatabase", false); } else { check.put("indatabase", true); } check.put("justusername", false); } }catch(Exception e){ System.out.println(e.getMessage()); //logg.log(Level.WARNING, " Exception", e.getMessage()); }finally{ // Actual contact insertion will happen at this step session.flush(); session.close(); } //returns Jobject return check; }
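One common arrangement (a sketch, not from the original post) is to build the SessionFactory once and share it; every new Configuration().configure().buildSessionFactory() call, as in the check() method above, opens its own connection pool, which is what tends to leave idle MySQL connections behind. Class and method names below are illustrative.

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public final class HibernateUtil {

        // Built once per JVM and reused by every request.
        private static final SessionFactory SESSION_FACTORY =
                new Configuration().configure().buildSessionFactory();

        private HibernateUtil() {
        }

        public static Session openSession() {
            return SESSION_FACTORY.openSession();
        }

        public static void shutdown() {
            // Closes the pooled connections when the application stops.
            SESSION_FACTORY.close();
        }
    }

check() would then call HibernateUtil.openSession() inside the try block and keep the session.close() in finally, as it already does.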

    Read the article

  • mysql_query() returns true, but mysql_num_rows() and mysql_fetch_array() give "not a valid r

    - by zlance4012
    Here is the code in question: -----From index.php----- require_once('includes/DbConnector.php'); // Create an object (instance) of the DbConnector $connector = new DbConnector(); // Execute the query to retrieve articles $query1 = "SELECT id, title FROM articles ORDER BY id DESC LIMIT 0,5"; $result = $connector->query($query1); echo "vardump1:"; var_dump($result); echo "\n"; /(!line 17!)/ echo "Number of rows in the result of the query:".mysql_num_rows($result)."\n"; // Get an array containing the results. // Loop for each item in that array while ($row = $connector->fetchArray($result)){ echo ' '; echo $row['title']; echo ' '; -----end index.php----- -----included DbConnector.php----- $settings = SystemComponent::getSettings(); // Get the main settings from the array we just loaded $host = $settings['dbhost']; $db = $settings['dbname']; $user = $settings['dbusername']; $pass = $settings['dbpassword']; // Connect to the database $this->link = mysql_connect($host, $user, $pass); mysql_select_db($db); register_shutdown_function(array(&$this, 'close')); } //end constructor //* Function: query, Purpose: Execute a database query * function query($query) { echo "Query Statement: ".$query."\n"; $this->theQuery = $query; return mysql_query($query, $this->link) or die(mysql_error()); } //* Function: fetchArray, Purpose: Get array of query results * function fetchArray($result) { echo "<|"; var_dump($result); echo "| \n"; /(!line 50!)/$res= mysql_fetch_array($result) or die(mysql_error()); echo $res['id']."-".$res['title']."-".$res['imagelink']."-".$res['text']; return $res; } -----end DbConnector.php----- -----Output----- Query Statement: SELECT id, title FROM articles ORDER BY id DESC LIMIT 0,5 vardump1:bool(true) PHP Error Message Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in /path to/index.php on line 17 Number of rows in the result of the query: <|bool(true) | PHP Error Message Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /path to/DbConnector.php on line 50
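The bool(true) in the output comes from the query() helper: "return mysql_query(...) or die(...)" returns the value of the whole or expression, which is a boolean, rather than the result resource, so mysql_num_rows() and mysql_fetch_array() are handed true instead of a resource. A sketch of how query() could be restructured so the resource itself is returned (keeping the legacy mysql_* functions used in the post):

    //* Function: query, Purpose: Execute a database query *
    function query($query)
    {
        echo "Query Statement: ".$query."\n";
        $this->theQuery = $query;

        // Run the query first, then test it; returning "... or die(...)" would
        // hand back the boolean value of the whole expression, not the resource.
        $result = mysql_query($query, $this->link);
        if (!$result) {
            die(mysql_error());
        }

        return $result; // a result resource that mysql_num_rows()/mysql_fetch_array() accept
    }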

    Read the article

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5 and along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications. If you're interested in a high level overview on what ASP.NET Web API is and how it fits into the ASP.NET stack you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by example introduction to ASP.NET Web API. All the code discussed in this article is available in GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine Article and updated for RTM release of ASP.NET Web API] Getting Started To start I’ll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I’ll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers. Add a Web API Controller Class Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1. Figure 1: This is how you create a new Controller Class in Visual Studio   Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I’ll use a Music Album model to demonstrate basic behavior of Web API. The model consists of albums and related songs where an album has properties like Name, Artist and YearReleased and a list of songs with a SongName and SongLength as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on Github. To add the file manually, create a new folder called Model, and add a new class Album.cs and copy the code into it. There’s a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current that I’ll use for the examples. Before we look at what goes into the controller class though, let’s hook up routing so we can access this new controller. Hooking up Routing in Global.asax To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods. 
Using an extension method to ASP.NET’s static RouteTable class, you can use the MapHttpRoute() (in the System.Web.Http namespace) method to hook-up the routing during Application_Start in global.asax.cs shown in Listing 1.using System; using System.Web.Routing; using System.Web.Http; namespace AspNetWebApi { public class Global : System.Web.HttpApplication { protected void Application_Start(object sender, EventArgs e) { RouteTable.Routes.MapHttpRoute( name: "AlbumVerbs", routeTemplate: "albums/{title}", defaults: new { symbol = RouteParameter.Optional, controller="AlbumApi" } ); } } } This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate respectively. HTTP Verb Routing Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown Listing 1. Notice that the route I’ve defined does not include an {action} route value or action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call the controller, and a GET request maps to any method that starts with Get. So methods called Get() or GetAlbums() are matched by a GET request and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature to match a route, query string or POST values. In lieu of the method name, the [HttpGet,HttpPost,HttpPut,HttpDelete, etc] attributes can also be used to designate the accepted verbs explicitly if you don’t want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST style resource APIs, it’s not required and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I’ll talk more about alternate routes later. When you’re finished with initial creation of files, your project should look like Figure 2.   Figure 2: The initial project has the new API Controller Album model   Creating a small Album Model Now it’s time to create some controller methods to serve data. For these examples, I’ll use a very simple Album and Songs model to play with, as shown in Listing 2. 
public class Song { public string AlbumId { get; set; } [Required, StringLength(80)] public string SongName { get; set; } [StringLength(5)] public string SongLength { get; set; } } public class Album { public string Id { get; set; } [Required, StringLength(80)] public string AlbumName { get; set; } [StringLength(80)] public string Artist { get; set; } public int YearReleased { get; set; } public DateTime Entered { get; set; } [StringLength(150)] public string AlbumImageUrl { get; set; } [StringLength(200)] public string AmazonUrl { get; set; } public virtual List<Song> Songs { get; set; } public Album() { Songs = new List<Song>(); Entered = DateTime.Now; // Poor man's unique Id off GUID hash Id = Guid.NewGuid().GetHashCode().ToString("x"); } public void AddSong(string songName, string songLength = null) { this.Songs.Add(new Song() { AlbumId = this.Id, SongName = songName, SongLength = songLength }); } } Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this and that's what I'll access to retrieve the base data:public static class AlbumData { // sample data - static list public static List<Album> Current = CreateSampleAlbumData(); /// <summary> /// Create some sample data /// </summary> /// <returns></returns> public static List<Album> CreateSampleAlbumData() { … }} You can check out the full code for the data generation online. Creating an AlbumApiController Web API shares many concepts of ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API’s binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title .public class AlbumApiController : ApiController { public IEnumerable<Album> GetAlbums() { var albums = AlbumData.Current.OrderBy(alb => alb.Artist); return albums; } public Album GetAlbum(string title) { var album = AlbumData.Current .SingleOrDefault(alb => alb.AlbumName.Contains(title)); return album; }} To access the first two requests, you can use the following URLs in your browser: http://localhost/aspnetWebApi/albumshttp://localhost/aspnetWebApi/albums/Dirty%20Deeds Note that you’re not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead Web API’s routing uses HTTP GET verb to route to these methods that start with Getxxx() with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route. 
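For a non-browser test, here is a minimal sketch (not part of the original article) of calling the two GET URLs with HttpClient from System.Net.Http, which is available in .NET 4.5; the base address below assumes the sample is hosted at the localhost path shown above.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;

    class AlbumClient
    {
        static void Main()
        {
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost/aspnetWebApi/") })
            {
                // Ask for JSON explicitly; the Accept header drives content negotiation.
                client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                // GET albums                -> routed to GetAlbums()
                Console.WriteLine(client.GetStringAsync("albums").Result);

                // GET albums/Dirty%20Deeds  -> routed to GetAlbum(title)
                Console.WriteLine(client.GetStringAsync("albums/Dirty%20Deeds").Result);
            }
        }
    }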
Content Negotiation When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown Figure 3. Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.   Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what’s going on here? Shouldn’t we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they’d like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for: Accept: text/html, application/xhtml+xml,application/xml; q=0.9,*/*;q=0.8 IE9 asks for: Accept: text/html, application/xhtml+xml, */* Note that Chrome’s Accept header includes application/xml, which Web API finds in its list of supported media types and returns an XML response. IE9 does not include an Accept header type that works on Web API by default, and so it returns the default format, which is JSON. This is an important and very useful feature that was missing from any previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results.. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don’t have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-type of the data sent. The same formats allowed for output are also allowed on input. Again, you don’t have to do anything in your code – Web API automatically performs the deserialization from the content. Accessing Web API JSON Data with jQuery A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it’s easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.$.getJSON("albums/", function (albums) { // make knockout template visible $(".album").show(); // create view object and attach array var view = { albums: albums }; ko.applyBindings(view); }); Figure 4 shows this and the next example’s HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js). Figu Figure 4: The Album Display sample uses JSON data loaded from Web API.   
The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it's entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:$(".albumlink").live("click", function () { var id = $(this).data("id"); // title $.getJSON("albums/" + id, function (album) { ko.applyBindings(album, $("#divAlbumDialog")[0]); $("#divAlbumDialog").show(); }); }); Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute. Explicitly Overriding Output Format When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR). If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those Controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation. 
To do this, I can adjust the output format and headers as shown in Listing 4.public HttpResponseMessage GetAlbums() { var albums = AlbumData.Current.OrderBy(alb => alb.Artist); // Create a new HttpResponse with Json Formatter explicitly var resp = new HttpResponseMessage(HttpStatusCode.OK); resp.Content = new ObjectContent<IEnumerable<Album>>( albums, new JsonMediaTypeFormatter()); // Get Default Formatter based on Content Negotiation //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums); resp.Headers.ConnectionClose = true; resp.Headers.CacheControl = new CacheControlHeaderValue(); resp.Headers.CacheControl.Public = true; return resp; } This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON.  If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums); This provides you an HttpResponse object that's pre-configured with the default formatter based on Content Negotiation. Once you have an HttpResponse object you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponse than the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features available much more discoverable even for non-hardcore REST/HTTP geeks. Non-Serialized Results The output returned doesn’t have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method. [HttpGet] public HttpResponseMessage AlbumArt(string title) { var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title)); if (album == null) { var resp = Request.CreateResponse<ApiMessageError>( HttpStatusCode.NotFound, new ApiMessageError("Album not found")); return resp; } // kinda silly - we would normally serve this directly // but hey - it's a demo. var http = new WebClient(); var imageData = http.DownloadData(album.AlbumImageUrl); // create response and return var result = new HttpResponseMessage(HttpStatusCode.OK); result.Content = new ByteArrayContent(imageData); result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg"); return result; } The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs - such as a resource can’t be found or a validation error – you can return an error response to the client that’s very specific to the error. In GetAlbumArt(), if the album can’t be found, we want to return a 404 Not Found status (and realistically no error, as it’s an image). 
Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn’t have an image. When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:return Request.CreateResponse<ApiMessageError>( HttpStatusCode.NotFound, new ApiMessageError("Album not found") );   If the album can be found, the image will be returned. The image is downloaded into a byte[] array, and then assigned to the result’s Content property. I created a new ByteArrayContent instance and assigned the image’s bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and handle the default formatter assignments that should be used to send the data out . Although HttpResponseMessage results require more code than returning a plain .NET value from a method, it allows much more control over the actual HTTP processing than automatic processing. It also makes it much easier to test your controller methods as you get a response object that you can check for specific status codes and output messages rather than just a result value. Routing Again Ok, let’s get back to the image example. Using the original routing we have setup using HTTP Verb routing there's no good way to serve the image. In order to return my album art image I’d like to use a URL like this: http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image In order to create a URL like this, I have to create a new Controller because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action based routing into a an HTTP Verb routing controller - you can only map HTTP Verbs and each method has to be unique based on parameter signature. You can't have multiple GET operations to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above Image URL work with any combination of HTTP Verb plus Custom routing using the single Albums controller. There are number of ways around this, but all involve additional controllers.  Personally, I think it’s easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class. 
For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/rpc/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumRpcApi", action = "GetAblums" } ); Now I can use either of the following URLs to access the image: Custom route: (/albums/rpc/{title}/image)http://localhost/aspnetWebApi/albums/PowerAge/image Action route: (/albums/rpc/action/{title})http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge Sending Data to the Server To send data to the server and add a new album, you can use an HTTP POST operation. Since I’m using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum()to accept a new album from the client. Listing 6 shows the Web API code to add a new album.public HttpResponseMessage PostAlbum(Album album) { if (!this.ModelState.IsValid) { // my custom error class var error = new ApiMessageError() { message = "Model is invalid" }; // add errors into our client error model for client foreach (var prop in ModelState.Values) { var modelError = prop.Errors.FirstOrDefault(); if (!string.IsNullOrEmpty(modelError.ErrorMessage)) error.errors.Add(modelError.ErrorMessage); else error.errors.Add(modelError.Exception.Message); } return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error); } // update song id which isn't provided foreach (var song in album.Songs) song.AlbumId = album.Id; // see if album exists already var matchedAlbum = AlbumData.Current .SingleOrDefault(alb => alb.Id == album.Id || alb.AlbumName == album.AlbumName); if (matchedAlbum == null) AlbumData.Current.Add(album); else matchedAlbum = album; // return a string to show that the value got here var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty); resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(), Encoding.UTF8, "text/plain"); return resp; } The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC you can check the model by looking at ModelState.IsValid. If it’s not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the result ApiErrorMessage object. When a binding error occurs, you’ll want to return an HTTP error response and it’s best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP Status Code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoConent). The sample uses a simple static list to hold albums, so once you’ve added the album using the Post operation, you can hit the /albums/ URL to see that the new album was added. 
The jQuery code to call the POST operation from the client is shown in Listing 7. var id = new Date().getTime().toString(); var album = { "Id": id, "AlbumName": "Power Age", "Artist": "AC/DC", "YearReleased": 1977, "Entered": "2002-03-11T18:24:43.5580794-10:00", "AlbumImageUrl": "http://ecx.images-amazon.com/images/…", "AmazonUrl": "http://www.amazon.com/…", "Songs": [ { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12}, { "SongName": "Downpayment Blues", "SongLength": 4.22 }, { "SongName": "Riff Raff", "SongLength": 2.42 } ] } $.ajax( { url: "albums/", type: "POST", contentType: "application/json", data: JSON.stringify(album), processData: false, beforeSend: function (xhr) { // not required since JSON is default output xhr.setRequestHeader("Accept", "application/json"); }, success: function (result) { // reload list of albums page.loadAlbums(); }, error: function (xhr, status, p3, p4) { var err = "Error"; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as a POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. Success returns the result data, which is a string that's echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method: public HttpResponseMessage PutAlbum(Album album) { return PostAlbum(album); } To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT. Model Binding with UrlEncoded POST Variables In the example in Listing 7 I used JSON objects to post a serialized object to a server method that accepted a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to use plain UrlEncoded POST values to send to the server. Web API supports model binding that works similarly (but not identically) to MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following: $.post("albums/",{AlbumName: "Dirty Deeds", YearReleased: 1976 … },albumPostCallback); Although the code looks very similar to the client code we used before to pass JSON, here the data passed is URL encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
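For completeness, the same URL-encoded POST could be issued from a .NET client as well. This sketch is mine, not part of the original sample; it assumes the same albums/ endpoint and uses HttpClient with FormUrlEncodedContent from System.Net.Http, which sets the application/x-www-form-urlencoded content type for you.

// Send AlbumName/YearReleased as URL-encoded form values, mirroring the $.post() call above
var client = new HttpClient { BaseAddress = new Uri("http://localhost/aspnetWebApi/") };

var formValues = new Dictionary<string, string>
{
    { "AlbumName", "Dirty Deeds" },
    { "YearReleased", "1976" }
};

HttpResponseMessage response = client.PostAsync("albums/", new FormUrlEncodedContent(formValues)).Result;
Console.WriteLine(response.StatusCode);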
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle either the JSON object or the UrlEncoded POST values. Dynamic Access to POST Data There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have POST UrlEncoded values, you can access them dynamically using a FormDataCollection: [HttpPost] public string PostAlbum(FormDataCollection form) { return string.Format("{0} - released {1}", form.Get("AlbumName"),form.Get("YearReleased")); } The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when running Web API hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET. If your client is sending JSON to your server, and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods: [HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } There are quite a few options available to you to receive data with Web API, which gives you more choices for the right tool for the job. Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content - the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here: Passing multiple POST parameters to Web API Controller Methods Mapping UrlEncoded POST Values in ASP.NET Web API Handling Delete Operations Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title: public HttpResponseMessage DeleteAlbum(string title) { var matchedAlbum = AlbumData.Current.Where(alb => alb.AlbumName == title) .SingleOrDefault(); if (matchedAlbum == null) return new HttpResponseMessage(HttpStatusCode.NotFound); AlbumData.Current.Remove(matchedAlbum); return new HttpResponseMessage(HttpStatusCode.NoContent); } To call this action method using jQuery, you can use: $(".removeimage").live("click", function () { var $el = $(this).parent(".album"); var txt = $el.find("a").text(); $.ajax({ url: "albums/" + encodeURIComponent(txt), type: "Delete", success: function (result) { $el.fadeOut().remove(); }, error: jqError }); }); Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via route value or the querystring. Routing Conflicts In all requests with the exception of the AlbumArt image example shown so far, I used HTTP Verb routing that I set up in Listing 1.
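Listing 1 itself isn't included in this excerpt. For reference, an HTTP Verb route registration of the kind described here generally looks like the sketch below: the controller comes from the route, and the action is picked from the HTTP verb plus the method's parameter signature. The route name, template and defaults are my guesses based on the URLs used throughout (albums/ and albums/{title}), not the article's exact listing.

// HTTP Verb routing: no {action} segment; GET/POST/PUT/DELETE select the method
RouteTable.Routes.MapHttpRoute(
    name: "AlbumsVerbRouting",
    routeTemplate: "albums/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumApi"
    }
);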
HTTP Verb Routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn't really fit – the return of an image where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing: [HttpGet] public IQueryable<Album> GetSortableAlbums() { var albums = AlbumData.Current; // generally should be done only on actual queryable results (EF etc.) // Done here because we're running with a static list but otherwise might be slow return albums.AsQueryable(); } If you compile this code and try to now access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn't work. As I mentioned earlier for the image display, the only solution to get this method to work is to throw it into another controller. Because I already set up the AlbumRpcApiController I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL - it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping: RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/rpc/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumRpcApi", action = "GetAlbums" } ); As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this: http://localhost/AspNetWebApi/albums/rpc/SortableAlbums Error Handling I've already done some minimal error handling in the examples. For example, in Listing 6 I detected some known-error scenarios like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method like this: [HttpGet] public void ThrowException() { throw new UnauthorizedAccessException("Unauthorized Access Sucka"); } You can call it with this: http://localhost/AspNetWebApi/albums/rpc/ThrowException The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP Error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration: GlobalConfiguration .Configuration .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never; If you want more control over your error responses sent from code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the Controller action.
[HttpGet] public void ThrowError() { var resp = Request.CreateResponse<ApiMessageError>( HttpStatusCode.BadRequest, new ApiMessageError("Your code stinks!")); throw new HttpResponseException(resp); } Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other Exceptions fired inside of WebAPI, HttpResponseException bypasses the Exception Filters installed and instead just outputs the response you provide. In this case, the serialized ApiMessageError result string is returned in the default serialization format – XML or JSON. You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here’s a small helper method on the controller that you might use to send exception info back to the client consistently:private void ThrowSafeException(string message, HttpStatusCode statusCode = HttpStatusCode.BadRequest) { var errResponse = Request.CreateResponse<ApiMessageError>(statusCode, new ApiMessageError() { message = message }); throw new HttpResponseException(errResponse); } You can then use it to output any captured errors from code:[HttpGet] public void ThrowErrorSafe() { try { List<string> list = null; list.Add("Rick"); } catch (Exception ex) { ThrowSafeException(ex.Message); } }   Exception Filters Another more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic Exception filter implementation.public class UnhandledExceptionFilter : ExceptionFilterAttribute { public override void OnException(HttpActionExecutedContext context) { HttpStatusCode status = HttpStatusCode.InternalServerError; var exType = context.Exception.GetType(); if (exType == typeof(UnauthorizedAccessException)) status = HttpStatusCode.Unauthorized; else if (exType == typeof(ArgumentException)) status = HttpStatusCode.NotFound; var apiError = new ApiMessageError() { message = context.Exception.Message }; // create a new response and attach our ApiError object // which now gets returned on ANY exception result var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError); context.Response = errorResponse; base.OnException(context); } } Exception Filter Attributes can be assigned to an ApiController class like this:[UnhandledExceptionFilter] public class AlbumRpcApiController : ApiController or you can globally assign it to all controllers by adding it to the HTTP Configuration's Filters collection:GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter()); The latter is a great way to get global error trapping so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known-error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can on failure expect to see a JSON error result that corresponds to the real error that occurred rather than a 500 error along with HTML error page that IIS throws up. 
You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions - you often don't want to display error information from 'unknown' exceptions as they may contain sensitive system information or info that's not generally useful to users of your application/site. This is just one example of how ASP.NET Web API is configurable and extensible; exception filters are one of the ways you can plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future. Summary Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The key features behind its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and close attention to making HTTP semantics discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API. Related Resources Where does ASP.NET Web API fit? Sample Source Code on GitHub Passing multiple POST parameters to Web API Controller Methods Mapping UrlEncoded POST Values in ASP.NET Web API Creating a JSONP Formatter for ASP.NET Web API Removing the XML Formatter from ASP.NET Web API Applications. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?" Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for.  In general, that's quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here's a T-SQL script to create the example table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let's run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources.  The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let's run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before: The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_value in every row, so this optimization doesn't add any value in this specific case. The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_value columns.  Cool.
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • ASP.NET Web API - Screencast series Part 4: Paging and Querying

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we modified data on the server using DELETE and POST methods. In Part 4, we'll extend on our simple querying methods form Part 2, adding in support for paging and querying. This part shows two approaches to querying data (paging really just being a specific querying case) - you can do it yourself using parameters passed in via querystring (as well as headers, other route parameters, cookies, etc.). You're welcome to do that if you'd like. What I think is more interesting here is that Web API actions that return IQueryable automatically support OData query syntax, making it really easy to support some common query use cases like paging and filtering. A few important things to note: This is just support for OData query syntax - you're not getting back data in OData format. The screencast demonstrates this by showing the GET methods are continuing to return the same JSON they did previously. So you don't have to "buy in" to the whole OData thing, you're just able to use the query syntax if you'd like. This isn't full OData query support - full OData query syntax includes a lot of operations and features - but it is a pretty good subset: filter, orderby, skip, and top. All you have to do to enable this OData query syntax is return an IQueryable rather than an IEnumerable. Often, that could be as simple as using the AsQueryable() extension method on your IEnumerable. Query composition support lets you layer queries intelligently. If, for instance, you had an action that showed products by category using a query in your repository, you could also support paging on top of that. The result is an expression tree that's evaluated on-demand and includes both the Web API query and the underlying query. So with all those bullet points and big words, you'd think this would be hard to hook up. Nope, all I did was change the return type from IEnumerable<Comment> to IQueryable<Comment> and convert the Get() method's IEnumerable result using the .AsQueryable() extension method. public IQueryable<Comment> GetComments() { return repository.Get().AsQueryable(); } You still need to build up the query to provide the $top and $skip on the client, but you'd need to do that regardless. Here's how that looks: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize); $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. 
viewModel.comments(data); }); return false; }); }); And the neat thing is that - without any modification to our server-side code - we can modify the above jQuery call to request the comments be sorted by author: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize) + '&$orderby=Author'; $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. viewModel.comments(data); }); return false; }); }); So if you want to make use of OData query syntax, you can. If you don't like it, you're free to hook up your filtering and paging however you think is best. Neat. In Part 5, we'll add on support for Data Annotation based validation using an Action Filter.
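As noted above, you can also wire up paging yourself with plain querystring parameters instead of the OData query syntax. A hand-rolled version might look like the sketch below; it reuses the repository from the sample, but the parameter names and the ID ordering property are my assumptions, and it would replace the parameterless GetComments() action rather than sit alongside it (two GET actions with ambiguous signatures would conflict).

// Explicit paging via querystring: /api/comments?pageIndex=1&pageSize=10
public IEnumerable<Comment> GetComments(int pageIndex = 0, int pageSize = 10)
{
    return repository.Get()
        .OrderBy(c => c.ID)            // assumed key property; a stable order is needed before paging
        .Skip(pageIndex * pageSize)
        .Take(pageSize)
        .ToList();
}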

    Read the article

  • A ToDynamic() Extension Method For Fluent Reflection

    - by Dixin
Recently I needed to demonstrate some code with reflection, but I found it inconvenient and tedious. To simplify the reflection coding, I created a ToDynamic() extension method. The source code can be downloaded from here. Problem One example of complex reflection is in LINQ to SQL. The DataContext class has a property Provider, and this Provider has an Execute() method, which executes the query expression and returns the result. Assume this Execute() needs to be invoked to query a SQL Server database; then code like the following is expected: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // Executes the query. Here reflection is required, // because Provider, Execute(), and ReturnValue are not public members. IEnumerable<Product> results = database.Provider.Execute(query.Expression).ReturnValue; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } Of course, this code cannot compile. And, no one wants to write code like this. Again, this is just an example of complex reflection. using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider PropertyInfo providerProperty = database.GetType().GetProperty( "Provider", BindingFlags.NonPublic | BindingFlags.GetProperty | BindingFlags.Instance); object provider = providerProperty.GetValue(database, null); // database.Provider.Execute(query.Expression) // Here GetMethod() cannot be directly used, // because Execute() is an explicitly implemented interface method. Assembly assembly = Assembly.Load("System.Data.Linq"); Type providerType = assembly.GetTypes().SingleOrDefault( type => type.FullName == "System.Data.Linq.Provider.IProvider"); InterfaceMapping mapping = provider.GetType().GetInterfaceMap(providerType); MethodInfo executeMethod = mapping.InterfaceMethods.Single(method => method.Name == "Execute"); IExecuteResult executeResult = executeMethod.Invoke(provider, new object[] { query.Expression }) as IExecuteResult; // database.Provider.Execute(query.Expression).ReturnValue IEnumerable<Product> results = executeResult.ReturnValue as IEnumerable<Product>; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } This is hardly straightforward, and no fun to write. So this post implements a fluent-reflection solution with a ToDynamic() extension method: IEnumerable<Product> results = database.ToDynamic() // Starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue; C# 4.0 dynamic In this kind of scenario, it is easy to think of dynamic, which enables developers to write whatever they like after a dot: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider dynamic dynamicDatabase = database; dynamic results = dynamicDatabase.Provider.Execute(query).ReturnValue; } This throws a RuntimeBinderException at runtime: 'System.Data.Linq.DataContext.Provider' is inaccessible due to its protection level.
Here dynamic is able find the specified member. So the next thing is just writing some custom code to access the found member. .NET 4.0 DynamicObject, and DynamicWrapper<T> Where to put the custom code for dynamic? The answer is DynamicObject’s derived class. I first heard of DynamicObject from Anders Hejlsberg's video in PDC2008. It is very powerful, providing useful virtual methods to be overridden, like: TryGetMember() TrySetMember() TryInvokeMember() etc.  (In 2008 they are called GetMember, SetMember, etc., with different signature.) For example, if dynamicDatabase is a DynamicObject, then the following code: dynamicDatabase.Provider will invoke dynamicDatabase.TryGetMember() to do the actual work, where custom code can be put into. Now create a type to inherit DynamicObject: public class DynamicWrapper<T> : DynamicObject { private readonly bool _isValueType; private readonly Type _type; private T _value; // Not readonly, for value type scenarios. public DynamicWrapper(ref T value) // Uses ref in case of value type. { if (value == null) { throw new ArgumentNullException("value"); } this._value = value; this._type = value.GetType(); this._isValueType = this._type.IsValueType; } public override bool TryGetMember(GetMemberBinder binder, out object result) { // Searches in current type's public and non-public properties. PropertyInfo property = this._type.GetTypeProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in explicitly implemented properties for interface. MethodInfo method = this._type.GetInterfaceMethod(string.Concat("get_", binder.Name), null); if (method != null) { result = method.Invoke(this._value, null).ToDynamic(); return true; } // Searches in current type's public and non-public fields. FieldInfo field = this._type.GetTypeField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // Searches in base type's public and non-public properties. property = this._type.GetBaseProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in base type's public and non-public fields. field = this._type.GetBaseField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // The specified member is not found. result = null; return false; } // Other overridden methods are not listed. } In the above code, GetTypeProperty(), GetInterfaceMethod(), GetTypeField(), GetBaseProperty(), and GetBaseField() are extension methods for Type class. For example: internal static class TypeExtensions { internal static FieldInfo GetBaseField(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeField(name) ?? @base.GetBaseField(name); } internal static PropertyInfo GetBaseProperty(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeProperty(name) ?? 
@base.GetBaseProperty(name); } internal static MethodInfo GetInterfaceMethod(this Type type, string name, params object[] args) { return type.GetInterfaces().Select(type.GetInterfaceMap).SelectMany(mapping => mapping.TargetMethods) .FirstOrDefault( method => method.Name.Split('.').Last().Equals(name, StringComparison.Ordinal) && method.GetParameters().Count() == args.Length && method.GetParameters().Select( (parameter, index) => parameter.ParameterType.IsAssignableFrom(args[index].GetType())).Aggregate( true, (a, b) => a && b)); } internal static FieldInfo GetTypeField(this Type type, string name) { return type.GetFields( BindingFlags.GetField | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( field => field.Name.Equals(name, StringComparison.Ordinal)); } internal static PropertyInfo GetTypeProperty(this Type type, string name) { return type.GetProperties( BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( property => property.Name.Equals(name, StringComparison.Ordinal)); } // Other extension methods are not listed. } So now, when invoked, TryGetMember() searches the specified member and invoke it. The code can be written like this: dynamic dynamicDatabase = new DynamicWrapper<NorthwindDataContext>(ref database); dynamic dynamicReturnValue = dynamicDatabase.Provider.Execute(query.Expression).ReturnValue; This greatly simplified reflection. ToDynamic() and fluent reflection To make it even more straight forward, A ToDynamic() method is provided: public static class DynamicWrapperExtensions { public static dynamic ToDynamic<T>(this T value) { return new DynamicWrapper<T>(ref value); } } and a ToStatic() method is provided to unwrap the value: public class DynamicWrapper<T> : DynamicObject { public T ToStatic() { return this._value; } } In the above TryGetMember() method, please notice it does not output the member’s value, but output a wrapped member value (that is, memberValue.ToDynamic()). This is very important to make the reflection fluent. Now the code becomes: IEnumerable<Product> results = database.ToDynamic() // Here starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue .ToStatic(); // Unwraps to get the static value. With the help of TryConvert(): public class DynamicWrapper<T> : DynamicObject { public override bool TryConvert(ConvertBinder binder, out object result) { result = this._value; return true; } } ToStatic() can be omitted: IEnumerable<Product> results = database.ToDynamic() .Provider.Execute(query.Expression).ReturnValue; // Automatically converts to expected static value. Take a look at the reflection code at the beginning of this post again. Now it is much much simplified! Special scenarios In 90% of the scenarios ToDynamic() is enough. But there are some special scenarios. Access static members Using extension method ToDynamic() for accessing static members does not make sense. Instead, DynamicWrapper<T> has a parameterless constructor to handle these scenarios: public class DynamicWrapper<T> : DynamicObject { public DynamicWrapper() // For static. { this._type = typeof(T); this._isValueType = this._type.IsValueType; } } The reflection code should be like this: dynamic wrapper = new DynamicWrapper<StaticClass>(); int value = wrapper._value; int result = wrapper.PrivateMethod(); So accessing static member is also simple, and fluent of course. Change instances of value types Value type is much more complex. 
The main problem is that a value type is copied when it is passed to a method as a parameter. This is why the ref keyword is used for the constructor. That is, if a value type instance is passed to DynamicWrapper<T>, the instance itself will be stored in this._value of DynamicWrapper<T>. Without the ref keyword, when this._value is changed, the value type instance itself does not change. Consider FieldInfo.SetValue(). In the value type scenarios, invoking FieldInfo.SetValue(this._value, value) does not change this._value, because it changes a copy of this._value. I searched the Web and found a solution for setting the value of a field: internal static class FieldInfoExtensions { internal static void SetValue<T>(this FieldInfo field, ref T obj, object value) { if (typeof(T).IsValueType) { field.SetValueDirect(__makeref(obj), value); // For value type. } else { field.SetValue(obj, value); // For reference type. } } } Here __makeref is an undocumented keyword of C#. But method invocation has a problem. This is the source code of TryInvokeMember(): public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (binder == null) { throw new ArgumentNullException("binder"); } MethodInfo method = this._type.GetTypeMethod(binder.Name, args) ?? this._type.GetInterfaceMethod(binder.Name, args) ?? this._type.GetBaseMethod(binder.Name, args); if (method != null) { // Oops! // If the returnValue is a struct, it is copied to the heap. object resultValue = method.Invoke(this._value, args); // And result is a wrapper of that copied struct. result = new DynamicWrapper<object>(ref resultValue); return true; } result = null; return false; } If the returned value is a value type, it will definitely be copied, because MethodInfo.Invoke() returns object. If the value of the result is changed, the copied struct is changed instead of the original struct. The same applies to property and index access, since both are actually method invocations. To avoid confusion, setting properties and indexers is not allowed on structs. Conclusions The DynamicWrapper<T> provides a simplified solution for reflection programming. It works for normal classes (reference types), accessing both instance and static members. In most scenarios, just invoke the ToDynamic() method and access whatever you want: StaticType result = someValue.ToDynamic()._field.Method().Property[index]; In some special scenarios which require changing the value of a struct (value type), this DynamicWrapper<T> does not work perfectly; only changing a struct's field value is supported. The source code can be downloaded from here, including some unit test code.
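The listings above show only TryGetMember() and TryInvokeMember(); the other DynamicObject overrides are noted as "not listed". For reference, a TrySetMember() sketch that follows the same pattern, reusing the article's GetTypeProperty()/GetTypeField() helpers and the SetValue(ref T, object) extension shown above, could look like the following. This is my reconstruction, not the downloadable source, and it only handles the current type's members; base-type and interface lookups would follow the same pattern as TryGetMember().

public override bool TrySetMember(SetMemberBinder binder, object value)
{
    // Searches in the current type's public and non-public properties.
    PropertyInfo property = this._type.GetTypeProperty(binder.Name);
    if (property != null)
    {
        if (this._isValueType)
        {
            // The article disallows setting properties on structs (the set would only hit a boxed copy).
            return false;
        }
        property.SetValue(this._value, value, null);
        return true;
    }

    // Searches in the current type's public and non-public fields.
    FieldInfo field = this._type.GetTypeField(binder.Name);
    if (field != null)
    {
        // Uses the SetValue(ref T, object) extension so value type instances are updated in place.
        field.SetValue(ref this._value, value);
        return true;
    }

    // The specified member is not found.
    return false;
}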

    Read the article

  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today’s posting we’ll integrate this pattern into a method of dynamically composing a new query with an existing one. The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let’s first define a stream simulator as our data source: var generator = myApp.DefineObservable(     (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload())); This ‘generator’ produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let’s also define a sink for our example—an IObserver of double values that writes to the console: var console = myApp.DefineObserver(     (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))     .Deploy("ConsoleSink"); The observer takes a string as parameter which is used as a label on the console, so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process. Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well: var avg = myApp     .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>         from win in s.TumblingWindow(w)         select win.Avg(e => e.Value))     .Deploy("AverageQuery"); Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers—that can come and go at runtime—on the other side: var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>()); Subject are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below). Note that the Subject as we defined it here doesn’t know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs: var stream = subject.ToPointStreamable(     e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),     AdvanceTimeSettings.StrictlyIncreasingStartTime); In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables. Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above: var longAverages = avg(stream, TimeSpan.FromSeconds(5)); In order to run the query, we need to bind it to a sink, and bind the subject to the source: var standardQuery = longAverages     .Bind(console("5sec average"))     .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject)); Lastly, we start the process: standardQuery.Run("StandardProcess"); Now we have a simple query running end-to-end, producing results. 
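As an aside, the SourcePayload class used throughout isn't shown in the post. Judging from how it is used (e.Timestamp feeds the point event start time in ToPointStreamable, and e.Value is averaged in the window), a minimal stand-in could be the following; the random-value simulation is my assumption.

// Minimal stand-in for the post's SourcePayload type (shape inferred from usage).
public class SourcePayload
{
    private static readonly Random Rand = new Random();

    public DateTimeOffset Timestamp { get; set; }
    public double Value { get; set; }

    public SourcePayload()
    {
        Timestamp = DateTimeOffset.Now;      // start time for the point event
        Value = Rand.NextDouble() * 100.0;   // simulated payload value
    }
}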
What follows next is the crucial part of tapping into the Subject and adding another query that runs in parallel, using the same query definition (the “AverageQuery”) but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names: // simulate the addition of a 'fast' query from a separate server connection, // by retrieving the aggregation query fragment // (instead of simply using the 'avg' object) var averageQuery = myApp     .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery"); // retrieve the input sequence as a subject var inputSequence = myApp     .GetSubject<SourcePayload, SourcePayload>("Subject"); // retrieve the registered sink var sink = myApp.GetObserver<string, double>("ConsoleSink"); // turn the sequence into a temporal stream var stream2 = inputSequence.ToPointStreamable(     e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),     AdvanceTimeSettings.StrictlyIncreasingStartTime); // apply the query, now with a different window length var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1)); // bind new sink to query and run it var fastQuery = shortAverages     .Bind(sink("1sec average"))     .Run("FastProcess"); The attached solution demonstrates the sample end-to-end. Regards, The StreamInsight Team

    Read the article

  • Interface Builder is unable to open documents of type iPad XIB.

    - by sagar
After installing SDK 3.2 Beta 5 on your Mac, please follow these steps to understand my problem. Start Xcode. Click on the Help menu - select Developer Documentation from the toolbar of the window - click on Home - and select iPhone OS 3.2 Library. On the left side of the screen you can see the Cocoa Touch Layer category under Frameworks; select UIKit from it. On the right side you will have ToolbarSearch as the first link; click on it. After clicking on it, you will see an "Open Project in Xcode" option in the title. Click on it and save the project to open it. Now click on Run to execute this sample code. After compilation it will give you two errors, something like this: "Interface Builder is unable to open documents of type iPad XIB." I don't know why this error occurs. What is the solution to resolve it? Sagar.

    Read the article

  • Flex through Flash Builder 4; Connecting to a dynamic XML feed: "The response is not a valid XML or

    - by jtromans
I am learning how to use Flex with Adobe Flash Builder 4 standalone. I am working my way through the Adobe Flash Builder 4 Bible by David Gassner. This has led me to create my own micro problems to try and solve. I am trying to connect to a dynamic XML feed created by the following aspx page: generate_xml.aspx When I create the data connection through the Data/Service panel, I can pick between XML and HTTP. I figured because the generate_xml.aspx has to generate the XML file first, I should use the HTTP service as opposed to the XML. The HTTP service offers GET, which seems to be the kind of thing I want. However, I am really struggling to do this. I keep getting: "The response is not a valid XML or a JSON string" The actual STATIC generated XML file that is created by this page works perfectly when I save it and manually connect with the XML service. Therefore I know my XML code is properly formatted and contains no other HTML or JavaScript. I figure my problem occurs because the page itself is .aspx, but I cannot work out how to successfully ask Flex to request the output of this page, rather than the page itself.

    Read the article

  • How do I use XML Builder to create xml in rails?

    - by Angela
    I am trying the following, but the output XML is badly formed: def submit xml = Builder::XmlMarkup.new xml.instruct! xml.mail xml.documents xml.document xml.template xml.source "gallery" xml.name "Postcard: Image fill front" @xml_display = xml end I need it to look more like this: <?xml version="1.0" encoding="UTF-8"?> <mail> <documents> <document> <template> <source>gallery</source> <name>Postcard: Image fill front</name> </template> <sections> <section> <name>Text</name> <text>Hello, World!</text> </section> <section> <name>Image</name> <attachment>...attachment id...</attachment> </section> </sections> </document> </documents> <addressees> <addressee> <name>John Doe</name> <address>123 Main St</address> <city>Anytown</city> <state>AZ</state> <postal-code>10000</postal-code> </addressee> </addressees> </mail>

    Read the article

  • How to add namespaces to a flex AIR project in Flash Builder 4?

    - by milkplus
    In my ant build.xml script I have... <namespace uri="http://ns.foo.com/mxml/2011" manifest="src/manifest.xml"/> <namespace uri="library://ns.adobe.com/flex/spark" manifest="flex_src/spark-manifest.xml"/> <namespace uri="http://www.adobe.com/2006/mxml" manifest="flex_src/mx-manifest.xml"/> That works! But... I'm not sure how to add these namespaces to my project properties in Flash Builder 4 so I can debug. When I try, it changes this line in my .actionScriptProperties <compiler additionalCompilerArguments="-namespace http://ns.foo.com/mxml/2011 src/manifest.xml -namespace=library://ns.adobe.com/flex/spark flex_src/spark-manifest.xml -namespace http://www.adobe.com/2006/mxml flex_src/mx-manifest.xml" autoRSLOrdering="true" copyDependentFiles="true" fteInMXComponents="false" generateAccessible="true" htmlExpressInstall="true" htmlGenerate="false" htmlHistoryManagement="false" htmlPlayerVersionCheck="true" includeNetmonSwc="false" outputFolderPath="bin-debug" sourceFolderPath="src" strict="true" targetPlayerVersion="0.0.0" useApolloConfig="true" useDebugRSLSwfs="true" verifyDigests="true" warn="true"> but gives me a "no default arguments are expected" error. What is the reason for this error? The error location is "Unknown" and seems to refer to these compiler arguments.

    Read the article

  • iPhone: custom UITableViewCell with Interface Builder -> how to release cell objects?

    - by Stefan Klumpp
    The official documentation tells me I have to do these three things in order to manage my memory for "nib objects" correctly:

        @property (nonatomic, retain) IBOutlet UIUserInterfaceElementClass *anOutlet;

    "You should then either synthesize the corresponding accessor methods, or implement them according to the declaration, and (in iPhone OS) release the corresponding variable in dealloc."

        - (void)viewDidUnload {
            self.anOutlet = nil;
            [super viewDidUnload];
        }

    That makes sense for a normal view. However, how am I supposed to do that for a UITableView with custom UITableViewCells loaded through a .nib file? There the IBOutlets are in MyCustomCell.h (inherited from UITableViewCell), but that is not the place where I load the nib and apply it to the cell instances, because that happens in MyTableView.m. So do I still release the IBOutlets in the dealloc of MyCustomCell.m, or do I have to do something in MyTableView.m? Also, MyCustomCell.m doesn't have a - (void)viewDidUnload {} where I can set my IBOutlets to nil, while MyTableView.m does.
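    For what it's worth, viewDidUnload belongs to UIViewController, so a UITableViewCell subclass never gets one. A minimal sketch of the usual manual-reference-counting pattern (reusing the anOutlet name from the question) is to release retained outlets in the cell's own dealloc:

        // MyCustomCell.m (manual reference counting)
        - (void)dealloc {
            [anOutlet release];   // release each retained IBOutlet the cell owns
            anOutlet = nil;
            [super dealloc];
        }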

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts.

    In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

    The Example

    We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below:

        CREATE TABLE #Custs
        (
            CustomerID INTEGER NOT NULL,
            TerritoryID INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID INTEGER NOT NULL,
            OrderDate DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID INTEGER NOT NULL,
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID INTEGER NOT NULL,
            OrderQty SMALLINT NOT NULL,
            LineTotal NUMERIC(38,6) NOT NULL,
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query is shown in the full article.

    The Problem

    The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
    Every join in that plan estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post.

    In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere.

    We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the joins, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

    Adding the OPTION (HASH JOIN) hint produces a new estimated plan (shown in the full article).  That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on.)

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set.

    The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan appears in the full article.  This new query runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

    An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing.

    The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.

    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course.)

    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing.

    I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again.

    Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

    This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but the query does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.

    OK, so now that we understand the problems, what can we do to fix them?
    We can address the hash spilling by forcing a different order for the joins:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #Custs C
            JOIN #OrdHeader H
                ON H.CustomerID = C.CustomerID
            JOIN #OrdDetail D
                ON D.SalesOrderID = H.SalesOrderID
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

    I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

    Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • Querying the SSIS Catalog? Here’s a handy query!

    - by jamiet
    I’ve been working on a SQL Server Integration Services (SSIS) solution for about 6 months now and I’ve learnt many many things that I intend to share on this blog just as soon as I get the time. Here’s a very short starter-for-ten… I’ve found the following query to be utterly invaluable when interrogating the SSIS Catalog to discover what is going on in my executions:

        SELECT event_message_id, MESSAGE, package_name, event_name, message_source_name,
               package_path, execution_path, message_type, message_source_type
        FROM (
              SELECT em.*
              FROM   SSISDB.catalog.event_messages em
              WHERE  em.operation_id = (SELECT MAX(execution_id) FROM SSISDB.catalog.executions)
                 AND event_name NOT LIKE '%Validate%'
             ) q
        /* Put in whatever WHERE predicates you might like */
        --WHERE event_name = 'OnError'
        --WHERE package_name = 'Package.dtsx'
        --WHERE execution_path LIKE '%<some executable>%'
        ORDER BY message_time DESC

    Know it. Learn it. Love it. @jamiet
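    As a sketch of one obvious variation (the operation_id value below is made up), the same columns can be pulled for a specific execution rather than the most recent one:

        SELECT event_message_id, MESSAGE, package_name, event_name, message_source_name,
               package_path, execution_path, message_type, message_source_type
        FROM   SSISDB.catalog.event_messages em
        WHERE  em.operation_id = 12345          -- replace with the execution_id you are interested in
           AND event_name NOT LIKE '%Validate%'
        ORDER BY message_time DESC;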

    Read the article

  • libreoffice-base not configured yet

    - by Wicky
    I have the LibreOffice ppa installed (ppa:libreoffice/ppa) and today I had a problem after updating. I got the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         libreoffice-base : Depends: libreoffice-base-core (= 1:4.3.0-0ubuntu1~precise1) but 1:4.3.0-3ubuntu1~precise1 is installed
                            Depends: libreoffice-base-drivers (= 1:4.3.0-0ubuntu1~precise1) but 1:4.3.0-3ubuntu1~precise1 is installed
                            Depends: libreoffice-core (= 1:4.3.0-0ubuntu1~precise1) but 1:4.3.0-3ubuntu1~precise1 is installed
         libreoffice-core : Breaks: libreoffice-base (<< 1:4.3.0-3ubuntu1~precise1) but 1:4.3.0-0ubuntu1~precise1 is installed
        E: Unmet dependencies. Try using -f.

    After trying sudo apt-get install -f I got the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libreoffice-base
        Suggested packages:
          libreoffice-gcj libreoffice-report-builder unixodbc
        The following packages will be upgraded:
          libreoffice-base
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/2170 kB of archives.
        After this operation, 2841 kB of additional disk space will be used.
        Do you want to continue [Y/n]?
        dpkg: dependency problems prevent configuration of libreoffice-base:
         libreoffice-base depends on libreoffice-base-core (= 1:4.3.0-0ubuntu1~precise1); however:
          Version of libreoffice-base-core on this system is 1:4.3.0-3ubuntu1~precise1.
         libreoffice-base depends on libreoffice-base-drivers (= 1:4.3.0-0ubuntu1~precise1); however:
          Version of libreoffice-base-drivers on this system is 1:4.3.0-3ubuntu1~precise1.
         libreoffice-base depends on libreoffice-core (= 1:4.3.0-0ubuntu1~precise1); however:
          Version of libreoffice-core on this system is 1:4.3.0-3ubuntu1~precise1.
         libreoffice-core (1:4.3.0-3ubuntu1~precise1) breaks libreoffice-base (<< 1:4.3.0-3ubuntu1~precise1) and is installed.
          Version of libreoffice-base to be configured is 1:4.3.0-0ubuntu1~precise1.
        dpkg: error processing libreoffice-base (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-report-builder-bin:
         libreoffice-report-builder-bin depends on libreoffice-base; however:
          Package libreoffice-base is not configured yet.
        No apport report written because the error message indicates it is a follow-up error from a previous failure.
        dpkg: error processing libreoffice-report-builder-bin (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice:
         libreoffice depends on libreoffice-base; however:
          Package libreoffice-base is not configured yet.
         libreoffice depends on libreoffice-report-builder-bin; however:
          Package libreoffice-report-builder-bin is not configured yet.
        dpkg: error processing libreoffice (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it is a follow-up error from a previous failure.
        No apport report written because the error message indicates it is a follow-up error from a previous failure.
        Errors were encountered while processing:
         libreoffice-base
         libreoffice-report-builder-bin
         libreoffice
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How can I solve this problem so that the dependencies are resolved? Do I have to configure libreoffice-base manually?
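    As a hedged sketch of the usual first steps when a PPA upgrade has only partially arrived (the package versions referenced are the ones from the error output above; whether this resolves the 0ubuntu1/3ubuntu1 mismatch depends on what the PPA is currently publishing), something like the following is typically tried:

        sudo apt-get update          # refresh the PPA's package lists
        sudo apt-get dist-upgrade    # pull the remaining 4.3.0-3ubuntu1~precise1 packages so -base matches -core
        sudo dpkg --configure -a     # let dpkg finish configuring half-installed packages
        sudo apt-get install -f      # resolve anything still left broken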

    Read the article

< Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >