Search Results

Search found 22318 results on 893 pages for 'mike post'.


  • Rails SQL injection?

    - by yuval
    In Rails, when I want to find by a user-given value and avoid SQL injection (escaping apostrophes and the like), I can do something like this:

        Post.all(:conditions => ['title = ?', params[:title]])

    I know that an unsafe way of doing this (open to SQL injection) is:

        Post.all(:conditions => "title = #{params[:title]}")

    My question is: does the following method prevent SQL injection or not?

        Post.all(:conditions => {:title => params[:title]})
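
    A quick way to check is to look at the SQL each form generates: hash conditions are quoted by ActiveRecord just like the array form, so they are safe from injection. A minimal sketch (the exact SQL shown is an assumption and varies by adapter):

        # Hash conditions: ActiveRecord quotes the value before building the query.
        malicious = "'; DROP TABLE posts; --"
        Post.all(:conditions => { :title => malicious })
        # => SELECT * FROM `posts` WHERE (`posts`.`title` = '\'; DROP TABLE posts; --')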

    Read the article

  • getting active records to display as a plist

    - by phil swenson
    I'm trying to get a list of ActiveRecord results to display as a plist for consumption by the iPhone. I'm using the plist gem v3.0. My model is called Post, and I want Post.all (or any array of Posts) to display correctly as a plist. I have it working fine for a single Post instance: http://pastie.org/580902 (that is correct, what I would expect). To get that behavior I had to do this:

        class Post < ActiveRecord::Base
          def to_plist
            attributes.to_plist
          end
        end

    However, when I do a Post.all, I can't get it to display what I want. Here is what happens: http://pastie.org/580909 (I get marshalling). I want output more like this: http://pastie.org/580914

    I suppose I could just iterate the result set and append the plist strings, but that seems ugly; I'm sure there is a more elegant way to do this. I am rusty on Ruby right now, so the elegant way isn't obvious to me. It seems like I should be able to override ActiveRecord so that result sets pulling back more than one record use ActiveRecord::Base#to_plist plus another to_plist implementation for the collection. In Rails, this would go in environment.rb, right?
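
    A hedged sketch of one such approach, assuming the plist gem mixes to_plist into Hash and Array the same way it does for the single-record case: serialize the collection as an array of attribute hashes, which sidesteps the object marshalling.

        # Emit all posts as one plist document (an array of attribute hashes).
        posts_plist = Post.all.map { |p| p.attributes }.to_plist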

    Read the article

  • Why won't these two mod_rewrite rules work together?

    - by George Edison
    Here is what I have:

        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteRule ^users/(\d+)/post$ post.php [L]
        RewriteRule ^users/(\d+)$ user.php?id=$1 [L]

    The first rule doesn't work. The second one does. All I get when I enter .../users/1/post is a 404 error. What am I doing wrong?

    Edit: The error log doesn't have anything in it relating to this.
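
    A hedged suggestion (the rules as shown are syntactically valid, so this is a sketch of what was probably intended rather than a certain fix): the first rule discards its capture group, so it is worth confirming that its target receives the id the same way the second rule's does, and that the more specific pattern stays first.

        RewriteEngine on
        # More specific pattern first; forward the captured id so post.php
        # can read it, mirroring the user.php rule below.
        RewriteRule ^users/(\d+)/post$ post.php?id=$1 [L]
        RewriteRule ^users/(\d+)$ user.php?id=$1 [L]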

    Read the article

  • ASP.NET MVC - how to get the value from a textbox in my View?

    - by fearofawhackplanet
    If I have a textbox in my view:

        <div><%= Html.TextBox("Comments", Model.Comments)%></div>

    I want to post the contents of this textbox to the controller with an Ajax call. I only need this one value though, so I don't want to post the whole form back.

        <%= Ajax.ActionLink("update", "UpdateComments",
            new { comments = /* ????? */ },
            new AjaxOptions { HttpMethod="POST" })%>

    How do I get the textbox value?
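
    A hedged note: the route values passed to Ajax.ActionLink are baked into the link when the page is rendered on the server, so they cannot pick up whatever the user types later. A common workaround is a small client-side post; a sketch, assuming jQuery is available, that UpdateComments takes a comments parameter, and that the link has a hypothetical id of "update":

        // Read the textbox at click time and POST just that one value.
        $("#update").click(function () {
            $.post('<%= Url.Action("UpdateComments") %>',
                   { comments: $("#Comments").val() });
            return false; // suppress the link's default navigation
        });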

    Read the article

  • Joining tables, if percentage is above certain value

    - by CluelessGerman
    My question is similar to this one: Compare rows and get percentage. However, it's a little different; I adapted my question to the other post. I have two tables.

    First table:

        user_id | post_id
              1 |       1
              1 |       2
              1 |       3
              2 |      12
              2 |      15

    Second table:

        post_id | rating
              1 |      1
              1 |      2
              1 |      3
              2 |      1
              2 |      5
              3 |   null
              3 |      1
              3 |      4
             12 |      4
             15 |      1

    So now I would like to count the ratings for each post in the second table. If a post has more than, let's say, 50% positive ratings, I want to take its post_id, join it to the post_id in table one, and add 1 for the user_id. At the end it would return each user_id with its number of positive posts. The result for the above tables would be:

        user_id | helpfulPosts
              1 |            2
              2 |            1

    Posts 1 and 3 have a positive rating, because more than 50% of their ratings are 1-3. Post 2 is not positive, because its rating is exactly 50%. How would I achieve this?

    For clarification: it's a MySQL RDBMS, and a positive post is one where the number of rating_ids of 1, 2 and 3 is more than half of the overall ratings. Basically the same thing as in the other thread I posted above.
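
    A hedged sketch of one way to phrase this in MySQL, assuming the tables are named user_posts and post_ratings (the question doesn't give names) and that NULL ratings are left out of the count. Note that the worked example above treats post 3 as positive, so the exact handling of NULLs and of the 50% boundary may need tuning against the real data:

        SELECT up.user_id, COUNT(*) AS helpfulPosts
        FROM user_posts up
        JOIN (
            SELECT post_id
            FROM post_ratings
            GROUP BY post_id
            -- positive: ratings 1-3 outnumber half of all non-NULL ratings
            HAVING SUM(rating IN (1, 2, 3)) > COUNT(rating) / 2
        ) pos ON pos.post_id = up.post_id
        GROUP BY up.user_id;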

    Read the article

  • File upload with tdo-miniforms not working

    - by user338109
    I am using the TDO Mini Forms plugin for WordPress. I have a form set up that lets the user submit files. The files are successfully uploaded to the tmp folder, but once the post is created they are not copied into the allocated post folder. The only thing created in the folder with a post id is a file called array, which seems to hold the binary data of the uploaded files. The “correct” URLs are appended to the post content. I am running WordPress 2.9.2 on Snow Leopard using MAMP.

    Read the article

  • How to redirect with .htaccess (keeping legacy links)

    - by Laurent
    Hello, I recently switched CMSes. While using WordPress, I had this permalink convention: "/year/post". Now, I'd like to have "/year/month/post". To keep legacy links working, I need to redirect from "http://site.com/2009/sample-post" to "http://site.com/2009/01/sample-post". "01" should be permanent in this case. This is what I've got at the moment:

        RewriteEngine on
        RewriteCond $1 !^(images|system|themes|_|wp-content|mint|assets|favicon\.ico|robots\.txt|index\.php) [NC]
        RewriteRule ^(.*)$ /index.php?/$1 [L]

    Thanks in advance!
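
    A hedged sketch of a redirect that could sit above the existing rules, assuming every legacy two-segment URL should map to month 01 and that a permanent (301) redirect is wanted so search engines update their links:

        # Redirect /YYYY/post-slug to /YYYY/01/post-slug permanently,
        # before the catch-all rule hands everything to index.php.
        RewriteRule ^(\d{4})/([^/]+)$ /$1/01/$2 [R=301,L]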

    Read the article

  • Array structure returned by Yii's model

    - by user1104955
    I am a Yii beginner and am running into a bit of a wall; I hope someone will be able to help me get back onto track. I think this might be a fairly straightforward question for a seasoned Yii user, so here goes. In the controller, let's say I run the following call to the model:

        $variable = Post::model()->findAll();

    All works fine and I pass the variable into the view. Here's where I get pretty stuck: the array that is returned by the above query is far more complex than I anticipated, and I'm struggling to make sense of it. Here's a sample of what print_r($variable); gives (sorry if there's an easier way to show this array, I'm not aware of it):

        Array ( [0] => Post Object (
            [_md:CActiveRecord:private] => CActiveRecordMetaData Object (
                [tableSchema] => CMysqlTableSchema Object (
                    [schemaName] => [name] => tbl_post [rawName] => `tbl_post` [primaryKey] => id [sequenceName] => [foreignKeys] => Array ( )
                    [columns] => Array (
                        [id] => CMysqlColumnSchema Object ( [name] => id [rawName] => `id` [allowNull] => [dbType] => int(11) [type] => integer [defaultValue] => [size] => 11 [precision] => 11 [scale] => [isPrimaryKey] => 1 [isForeignKey] => [autoIncrement] => 1 [_e:CComponent:private] => [_m:CComponent:private] => )
                        [post] => CMysqlColumnSchema Object ( [name] => post [rawName] => `post` [allowNull] => [dbType] => text [type] => string [defaultValue] => [size] => [precision] => [scale] => [isPrimaryKey] => [isForeignKey] => [autoIncrement] => [_e:CComponent:private] => [_m:CComponent:private] => ) )
                    [_e:CComponent:private] => [_m:CComponent:private] => )
                [columns] => Array (
                    [id] => CMysqlColumnSchema Object ( [name] => id [rawName] => `id` [allowNull] => [dbType] => int(11) [type] => integer [defaultValue] => [size] => 11 [precision] => 11 [scale] => [isPrimaryKey] => 1 [isForeignKey] => [autoIncrement] => 1 [_e:CComponent:private] => [_m:CComponent:private] => )
                    [post] => CMysqlColumnSchema Object ( [name] => post [rawName] => `post` [allowNull] => [dbType] => text [type] => string [defaultValue] => [size] => [precision] => [scale] => [isPrimaryKey] => [isForeignKey] => [autoIncrement] => [_e:CComponent:private] => [_m:CComponent:private] => ) )
                [relations] => Array (
                    [responses] => CHasManyRelation Object ( [limit] => -1 [offset] => -1 [index] => [through] => [joinType] => LEFT OUTER JOIN [on] => [alias] => [with] => Array ( ) [together] => [scopes] => [name] => responses [className] => Response [foreignKey] => post_id [select] => * [condition] => [params] => Array ( ) [group] => [join] => [having] => [order] => [_e:CComponent:private] => [_m:CComponent:private] => ) )
                [attributeDefaults] => Array ( )
                [_model:CActiveRecordMetaData:private] => Post Object ( [_md:CActiveRecord:private] => CActiveRecordMetaData Object *RECURSION* [_new:CActiveRecord:private] => [_attributes:CActiveRecord:private] => Array ( ) [_related:CActiveRecord:private] => Array ( ) [_c:CActiveRecord:private] => [_pk:CActiveRecord:private] => [_alias:CActiveRecord:private] => t [_errors:CModel:private] => Array ( ) [_validators:CModel:private] => [_scenario:CModel:private] => [_e:CComponent:private] => [_m:CComponent:private] => ) )
            [_new:CActiveRecord:private] =>
            [_attributes:CActiveRecord:private] => Array ( [id] => 1 [post] => User Post )
            [_related:CActiveRecord:private] => Array ( )
            [_c:CActiveRecord:private] => [_pk:CActiveRecord:private] => 1 [_alias:CActiveRecord:private] => t
            [_errors:CModel:private] => Array ( ) [_validators:CModel:private] => [_scenario:CModel:private] => update
            [_e:CComponent:private] => [_m:CComponent:private] => ) )

    Can anyone explain to me why the model returns such a complex array? It doesn't seem to matter what tables, columns or relations are used in the application; they all seem to return this format. Also, can someone explain the structure to me so that I can isolate the variables that I want to recover? Many thanks in advance, Nick
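
    A hedged note on what's going on: findAll() returns an array of CActiveRecord objects rather than plain rows, and most of what print_r shows is Yii's private metadata (table schema, relations) that can simply be ignored. The column values live in the record's attributes, reachable via property access:

        <?php
        $posts = Post::model()->findAll();
        foreach ($posts as $post) {
            echo $post->id;              // a column, via CActiveRecord's __get
            echo $post->post;            // the 'post' column
            print_r($post->attributes);  // plain array of column => value
        }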

    Read the article

  • how to get the content of iframe in a php variable? [closed]

    - by Sahil
    My code is somewhat like this:

        <?php
        if($_REQUEST['post']) {
            $title=$_REQUEST['title'];
            $body=$_REQUEST['body'];
            echo $title.$body;
        }
        ?>
        <script type="text/javascript" src="texteditor.js"></script>
        <form action="" method="post">
            Title: <input type="text" name="title"/><br>
            <a id="bold" class="font-bold"> B </a>
            <a id="italic" class="italic"> I </a>
            Post: <iframe id="textEditor" name="body"></iframe>
            <input type="submit" name="post" value="Post" />
        </form>

    The texteditor.js file code is:

        $(document).ready(function(){
            document.getElementById('textEditor').contentWindow.document.designMode="on";
            document.getElementById('textEditor').contentWindow.document.close();
            $("#bold").click(function(){
                if($(this).hasClass("selected")) {
                    $(this).removeClass("selected");
                } else {
                    $(this).addClass("selected");
                }
                boldIt();
            });
            $("#italic").click(function(){
                if($(this).hasClass("selected")) {
                    $(this).removeClass("selected");
                } else {
                    $(this).addClass("selected");
                }
                ItalicIt();
            });
        });

        function boldIt(){
            var edit = document.getElementById("textEditor").contentWindow;
            edit.focus();
            edit.document.execCommand("bold", false, "");
            edit.focus();
        }

        function ItalicIt(){
            var edit = document.getElementById("textEditor").contentWindow;
            edit.focus();
            edit.document.execCommand("italic", false, "");
            edit.focus();
        }

        function post(){
            var iframe = document.getElementById("body").contentWindow;
        }

    Actually I want to fetch the data from this text editor (which is created using an iframe and JavaScript) and store it somewhere else. I'm not able to fetch the content that is entered in the editor (i.e. the iframe here). Please help me out with this.
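
    A hedged explanation of the missing content: an iframe is not a form control, so nothing typed into it is ever submitted with the form (its name attribute names a browsing context, not a field). A common pattern is to copy the editor's HTML into a hidden input just before the form submits, sketched here:

        // Copy the editable iframe's HTML into a hidden field on submit,
        // so the PHP side receives it as $_REQUEST['body'].
        $("form").submit(function () {
            var html = document.getElementById("textEditor")
                               .contentWindow.document.body.innerHTML;
            $("<input>", { type: "hidden", name: "body", value: html })
                .appendTo(this);
        });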

    Read the article

  • Will the error be displayed?

    - by user281180
    I have an Ajax post, and in the controller I return nothing. In case there is a failure, will the error message be displayed with the following code?

        [AcceptVerbs(HttpVerbs.Post)]
        public void Edit(Model model)
        {
            model.Save();
        }

        $.ajax({
            type: "POST",
            url: '<%=Url.Action("Edit","test") %>',
            data: JSON.stringify(data),
            contentType: "application/json; charset=utf-8",
            dataType: "html",
            success: function() {
            },
            error: function(request, status, error) {
                alert("Error: " + request.responseText); // '+', not '&', concatenates in JavaScript
            }
        });
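
    A hedged answer: jQuery's error callback fires only when the request itself fails or the server answers with a failing HTTP status. A void action that throws will surface as a 500 and trigger it, but a failure that is caught and swallowed still returns 200 and looks like success. A sketch of making the outcome explicit on the controller side:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Model model)
        {
            try
            {
                model.Save();
                return new EmptyResult();
            }
            catch (Exception ex)
            {
                // A failing status code routes the response to the jQuery
                // error callback, with the message in request.responseText.
                Response.StatusCode = 500;
                return Content(ex.Message);
            }
        }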

    Read the article

  • Ajax posting to PHP

    - by JQonfused
    Hi guys, I'm testing a jQuery Ajax post method on a local Apache 2.2 server with PHP 5.3 (totally new at this). Here are the files, all in the same folder.

    The HTML body (jQuery library included in the head):

        <form id="postForm" method="post">
            <label for="name">Input Name</label>
            <input type="text" name="name" id="name" /><br />
            <label for="age">Input Age</label>
            <input type="text" name="age" id="age" /><br />
            <input type="submit" value="Submit" id="submitBtn" />
        </form>
        <div id="resultDisplay"></div>
        <script src="queryRequest.js"></script>

    queryRequest.js:

        $(document).ready(function(){
            $('#s').focus();
            $('#postForm').submit(function(){
                var name = $('#name').val();
                var age = $('#age').val();
                var URL = "post.php";
                $.ajax({
                    type:'POST',
                    url: URL,
                    datatype:'json',
                    data:{'name': name ,'age': age},
                    success: function(data){
                        $('#resultDisplay').append("Value returned.<br />name: "+data.name+" age: "+data.age);
                    },
                    error: function() {
                        $('resultDisplay').append("ERROR!")
                    }
                });
            });
        });

    post.php:

        <?php
        $name = $_POST['name'];
        $age = $_POST['age'];
        $return = array('name' => $name, 'age' => $age);
        echo json_encode($return);
        ?>

    After inputting the two fields and pressing Submit, the success method is called and the text appended, but the values returned from the Ajax post are undefined. Then, after less than a second, the text fields are emptied and the text appended to the div is gone. It doesn't seem to be a page refresh, though, since there's no empty-page flash. What's going on here? I'm sure it's a silly mistake, but Firebug isn't telling me anything.
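
    Two hedged observations about the code above: the jQuery option is spelled dataType, so the lowercase datatype is silently ignored and the JSON response is never parsed (hence data.name being undefined), and the submit handler never cancels the default submission, so the browser reloads the page and wipes both the fields and the appended text. A sketch of the corrected handler:

        $('#postForm').submit(function (event) {
            event.preventDefault(); // stop the normal form submission/reload
            $.ajax({
                type: 'POST',
                url: 'post.php',
                dataType: 'json', // capital T, so jQuery parses the JSON reply
                data: { name: $('#name').val(), age: $('#age').val() },
                success: function (data) {
                    $('#resultDisplay').append(
                        "Value returned.<br />name: " + data.name + " age: " + data.age);
                },
                error: function () {
                    $('#resultDisplay').append("ERROR!"); // '#' was missing here too
                }
            });
        });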

    Read the article

  • jquery serialize()+custom?

    - by tazphoenix
    When using $.post and $.get in jQuery, is there any way to add custom variables to the URL and send them too? I tried the following:

        $.ajax({type:"POST", url:"file.php?CustomVar=data", data:$("#form").serialize()});

    And:

        <input name="CustomVar" type="hidden" value="data" />

        $.ajax({type:"POST", url:"file.php", data:$("#form").serialize()});

    The first one's problem is that it sends the custom variable as GET, but I want to receive it as POST. The second one, well, I'm using it right now, but isn't there a better way?
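
    A hedged alternative: serialize() just returns a URL-encoded string, so an extra POST field can be appended to it directly (run the value through encodeURIComponent if it isn't a fixed constant):

        // The appended field arrives in PHP as $_POST['CustomVar'],
        // exactly like the fields serialized from the form.
        $.ajax({
            type: "POST",
            url: "file.php",
            data: $("#form").serialize() + "&CustomVar=" + encodeURIComponent("data")
        });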

    Read the article

  • IO::Pipe - close(<handle>) does not set $?

    - by danboo
    My understanding is that closing the handle for an IO::Pipe object should be done with the method ($fh->close) and not the built-in (close($fh)). The other day I goofed and used the built-in out of habit on an IO::Pipe object that was opened to a command that I expected to fail. I was surprised when $? was zero and my error checking wasn't triggered. I realized my mistake: if I use the built-in, IO::Pipe can't perform the waitpid() and can't set $?. But what I was surprised by was that perl seemed to still close the pipe without setting $? via the core. I worked up a little test script to show what I mean:

        use 5.012;
        use warnings;
        use IO::Pipe;

        say 'init pipes:';
        pipes();

        my $fh = IO::Pipe->reader(q(false));

        say 'post open pipes:';
        pipes();

        say 'return: ' . $fh->close;
        #say 'return: ' . close($fh);
        say 'status: ' . $?;
        say q();

        say 'post close pipes:';
        pipes();

        sub pipes {
            for my $fd ( glob("/proc/self/fd/*") ) {
                say readlink($fd) if -p $fd;
            }
            say q();
        }

    When using the method, it shows the pipe being gone after the close and $? is set as I expected:

        init pipes:

        post open pipes:
        pipe:[992006]

        return: 1
        status: 256

        post close pipes:

    And when using the built-in, it also appears to close the pipe, but does not set $?:

        init pipes:

        post open pipes:
        pipe:[952618]

        return: 1
        status: 0

        post close pipes:

    It seems odd to me that the built-in results in the pipe closure but doesn't set $?. Can anyone help explain the discrepancy? Thanks!
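
    A hedged explanation: an IO::Pipe handle carries an ordinary file descriptor, so the core close() happily closes it (which is why the pipe disappears from /proc), but only IO::Pipe's close method knows a child process is attached and performs the waitpid() that populates $?. If the built-in has already been used, the child can still be reaped by hand, e.g.:

        # After close($fh), the child forked by IO::Pipe is still unreaped;
        # waiting on it ourselves recovers the exit status into $?.
        # (Assumes 'say' is enabled, as in the script above.)
        my $pid = waitpid(-1, 0);
        say "child $pid exited with status $?";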

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0).

    Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ...

    [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used?

    2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post-defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).

    Read the article

  • My Red Gate Experience

    - by Colin Rothwell
    I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks.

    When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody’s: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code.

    The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and ensures you never attempt to write too much code at a time. One really good experience of this was early on in my internship, when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written.

    I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as for solving particularly difficult problems. Later on in my internship, for my down tools week project, I was working on adding Python support to Glimpse. Another intern and I pair-programmed the entire project, using ping pong pair programming as much as possible. One bonus this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written.

    I’ve also been really won over to the value of descriptive variable names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms. I’ve swiftly realised that this is a bad idea when working with a team: saving a few keystrokes is inevitably not worth it when it comes to reading the code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no-no: it’s so very easy to change the behaviour of the code whilst forgetting about the comments.

    Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practices and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it!

    Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table!

    I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!

    Read the article

  • what to do with a flawed C++ skills test

    - by Mike Landis
    In the following gcc.gnu.org post, Nathan Myers says that a C++ skills test at SANS Consulting Services contained three errors in nine questions. Looking around, one of the first on-line C++ skills tests I ran across was: http://www.geekinterview.com/question_details/13090

    I looked at question 1:

        find(int x,int y) { return ((x<y)?0:(x-y)); }

        call find(a,find(a,b)) use to find
        (a) maximum of a,b
        (b) minimum of a,b
        (c) positive difference of a,b
        (d) sum of a,b

    ...immediately wondering why anyone would write anything so obtuse. Getting past the absurdity, I didn't really like any of the answers, immediately eliminating (a) and (b) because you can get back zero (which is neither a nor b) in a variety of circumstances. Sum or difference seemed more likely, except that you could also get zero regardless of the magnitudes of a and b. So I put Matlab to work (code below) and found: when either a or b is negative you get zero; when b > a you get a; otherwise you get b. So the answer is (b), min(a,b), if a and b are positive, though strictly speaking the answer should be "none of the above" because there are no range restrictions on either variable. That forces test takers into a dilemma: choose the best available answer and be wrong in 3 of 4 quadrants, or don't answer, leaving the door open to the conclusion that the grader thinks you couldn't figure it out. The solution for test givers is to fix the test, but in the interim, what's the right course of action for test takers? Complain about the questions?

        function z = findfunc(x,y)
            for i=1:length(x)
                if x(i) < y(i)
                    z(i) = 0;
                else
                    z(i) = x(i) - y(i);
                end
            end
        end

        function [b,d1,z] = plotstuff()
            k = 50;
            a = [-k:1:k];
            b = (2*k+1) * rand(length(a),1) - k;
            d1 = findfunc(a,b);
            z = findfunc(a,d1);
            plot( a, b, 'r.', a, d1, 'g-', a, z, 'b-');
        end
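
    A hedged paper check of the same conclusion, written out as the case analysis in C++ (the question's own language):

        #include <cassert>

        // The test's function: (x < y) ? 0 : x - y
        int find(int x, int y) { return (x < y) ? 0 : (x - y); }

        int main() {
            // a >= b >= 0: inner find(a,b) = a-b, outer = a-(a-b) = b -> min
            assert(find(7, find(7, 3)) == 3);
            // 0 <= a < b: inner = 0, outer = a-0 = a -> min
            assert(find(3, find(3, 7)) == 3);
            // A negative input breaks it: a = -2, b = 5 yields 0, not -2
            assert(find(-2, find(-2, 5)) == 0);
        }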

    Read the article

  • Parsing SQLIO Output to Excel Charts using Regex in PowerShell

    - by Jonathan Kehayias
    Today Joe Webb ( Blog | Twitter ) blogged about The Power of Regex in PowerShell, and in his post he shows how to parse the SQL Server Error Log for events of interest. At the end of his blog post, Joe asked about other places where Regular Expressions have been useful in PowerShell, so I thought I’d blog my script for parsing SQLIO output using Regex in PowerShell, to populate an Excel worksheet and build charts based on the results automatically. If you’ve never used SQLIO, Brent Ozar ( Blog | Twitter...(read more)
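
    As a flavor of the technique (a hedged sketch, not Jonathan's actual script; the IOs/sec and MBs/sec line formats are typical of SQLIO output):

        # Pull throughput numbers out of an SQLIO result file with regexes.
        $text = Get-Content .\sqlio_results.txt | Out-String
        if ($text -match 'IOs/sec:\s+([\d.]+)') { $iops = [double]$Matches[1] }
        if ($text -match 'MBs/sec:\s+([\d.]+)') { $mbps = [double]$Matches[1] }
        "{0} IOPS, {1} MB/s" -f $iops, $mbps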

    Read the article

  • Adding Client-Side events to DevExpress ASP.Net controls

    - by nikolaosk
    I have been involved in an ASP.NET project recently, and I have implemented it using the awesome DevExpress ASP.NET controls. In this post I would like to show you how to use the client-side events, which can make the end user's experience of your web application much better. We avoid unnecessary page flickering and postbacks. All this functionality is possible through the magic of Ajax and JavaScript. I am not going to cover Ajax and JavaScript in this post. With the DevExpress ASP.NET controls...(read more)

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData.

    Introduction to OData

    The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by SharePoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites:

        http://www.odata.org
        http://msdn.microsoft.com/en-us/data/bb931106.aspx

    When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table.

    Create the Database and Data Model

    The MoviesDB database is a simple database that contains the Movies table shown in the original post's figure. You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry.

    Create a WCF Data Service

    You create a new WCF Data Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc.

    Figure 1 – Adding a WCF Data Service

    Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify.

    Listing 1 – New WCF Data Service File

        using System;
        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;
        using System.ServiceModel.Web;
        using System.Web;

        namespace WebApplication1
        {
            public class MovieService : DataService< /* TODO: put your data source class name here */ >
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                    // Examples:
                    // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
                    // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework.

    Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. The updated MovieService.svc is contained in Listing 2:

    Listing 2 – MovieService.svc

        using System.Data.Services;
        using System.Data.Services.Common;

        namespace WebApplication1
        {
            public class MovieService : DataService<MoviesDBEntities>
            {
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL:

        /MovieService.svc/Movies

    The request must be a POST request. The Movie must be represented as JSON.

    Using jQuery with OData

    The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol.

    Listing 3 – Default.htm

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>jQuery OData Insert</title>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="Scripts/json2.js" type="text/javascript"></script>
        </head>
        <body>
            <form>
                <label>Title:</label>
                <input id="title" />
                <br />
                <label>Director:</label>
                <input id="director" />
            </form>
            <button id="btnAdd">Add Movie</button>

            <script type="text/javascript">
                $("#btnAdd").click(function () {
                    // Convert the form into an object
                    var data = { Title: $("#title").val(), Director: $("#director").val() };
                    // JSONify the data
                    var data = JSON.stringify(data);
                    // Post it
                    $.ajax({
                        type: "POST",
                        contentType: "application/json; charset=utf-8",
                        url: "MovieService.svc/Movies",
                        data: data,
                        dataType: "json",
                        success: insertCallback
                    });
                });

                function insertCallback(result) {
                    // unwrap result
                    var newMovie = result["d"];
                    // Show primary key
                    alert("Movie added with primary key " + newMovie.Id);
                }
            </script>
        </body>
        </html>

    jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method:

        var data = JSON.stringify(data);

    You can download the JSON2 library from the following website:

        http://www.json.org/js.html

    The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method, and the method simply displays the primary key of the new Movie.

    Summary

    The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.

    Read the article
