Search Results


  • New Features in ASP.NET Web API 2 - Part I

    - by dwahlin
    I'm a big fan of ASP.NET Web API. It provides a quick yet powerful way to build RESTful HTTP services that can easily be consumed by a variety of clients. While it's simple to get started with, it has a wealth of features such as filters, formatters, and message handlers that can be used to extend it when needed. In this post I'm going to provide a quick walk-through of some of the key new features in version 2. I'll focus on two of my favorite features, which are related to routing and HTTP responses, and cover additional features in a future post.

    Attribute Routing

    Routing has been a core feature of Web API since its initial release and something that's built into new Web API projects out of the box. However, there are a few scenarios where defining routes can be challenging, such as nested routes (more on that in a moment) and any situation where a lot of custom routes have to be defined. For this example, let's assume that you'd like to define the following nested route:

        /customers/1/orders

    This type of route would select a customer with an Id of 1 and then return all of their orders. Defining this type of route in the standard WebApiConfig class is certainly possible, but it isn't the easiest thing to do for people who don't understand routing well. Here's an example of how the route shown above could be defined:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.Routes.MapHttpRoute(
                    name: "CustomerOrdersApiGet",
                    routeTemplate: "api/customers/{custID}/orders",
                    defaults: new { custID = 0, controller = "Customers", action = "Orders" }
                );

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

                GlobalConfiguration.Configuration.Formatters.Insert(0, new JsonpFormatter());
            }
        }

    With attribute-based routing, defining these types of nested routes is greatly simplified. To get started you first need to make a call to the new MapHttpAttributeRoutes() method in the standard WebApiConfig class (or a custom class that you may have created that defines your routes), as shown next:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                // Allow for attribute-based routes
                config.MapHttpAttributeRoutes();

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );
            }
        }

    Once attribute-based routes are configured, you can apply the Route attribute to one or more controller actions.
    Here's an example:

        [HttpGet]
        [Route("customers/{custId:int}/orders")]
        public List<Order> Orders(int custId)
        {
            var orders = _Repository.GetOrders(custId);
            if (orders == null)
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
            }
            return orders;
        }

    This example maps the custId route parameter to the custId parameter in the Orders() method and also ensures that the route parameter is typed as an integer. The Orders() method can be called using the following route:

        /customers/2/orders

    While this is extremely easy to use and gets the job done, it doesn't include the default "api" string on the front of the route that you might be used to seeing. You could add "api" in front of the route and make it "api/customers/{custId:int}/orders", but then you'd have to repeat that across other attribute-based routes as well. To simplify this type of task you can add the RoutePrefix attribute above the controller class, as shown next, so that "api" (or whatever the custom starting point of your route is) is applied to all attribute routes:

        [RoutePrefix("api")]
        public class CustomersController : ApiController
        {
            [HttpGet]
            [Route("customers/{custId:int}/orders")]
            public List<Order> Orders(int custId)
            {
                var orders = _Repository.GetOrders(custId);
                if (orders == null)
                {
                    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
                }
                return orders;
            }
        }

    There's much more that you can do with attribute-based routing in ASP.NET. Check out the following post by Mike Wasson for more details.

    Returning Responses with IHttpActionResult

    The first version of Web API provided a way to return custom HttpResponseMessage objects, which were pretty easy to use overall. However, Web API 2 now wraps some of the functionality available in version 1 to simplify the process even more. A new interface named IHttpActionResult (similar to ActionResult in ASP.NET MVC) has been introduced which can be used as the return type for Web API controller actions. To return a custom response you can use new helper methods exposed through ApiController such as: Ok, NotFound, Exception, Unauthorized, BadRequest, Conflict, Redirect, and InvalidModelState. Here's an example of how IHttpActionResult and the helper methods can be used to clean up code. This is the typical way to return a custom HTTP response in version 1:

        public HttpResponseMessage Delete(int id)
        {
            var status = _Repository.DeleteCustomer(id);
            if (status)
            {
                return new HttpResponseMessage(HttpStatusCode.OK);
            }
            else
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
        }

    With version 2 we can replace HttpResponseMessage with IHttpActionResult and simplify the code quite a bit:

        public IHttpActionResult Delete(int id)
        {
            var status = _Repository.DeleteCustomer(id);
            if (status)
            {
                //return new HttpResponseMessage(HttpStatusCode.OK);
                return Ok();
            }
            else
            {
                //throw new HttpResponseException(HttpStatusCode.NotFound);
                return NotFound();
            }
        }

    You can also clean up post (insert) operations using the helper methods.
    Here's a version 1 post action:

        public HttpResponseMessage Post([FromBody]Customer cust)
        {
            var newCust = _Repository.InsertCustomer(cust);
            if (newCust != null)
            {
                var msg = new HttpResponseMessage(HttpStatusCode.Created);
                msg.Headers.Location = new Uri(Request.RequestUri + newCust.ID.ToString());
                return msg;
            }
            else
            {
                throw new HttpResponseException(HttpStatusCode.Conflict);
            }
        }

    This is what the code looks like in version 2:

        public IHttpActionResult Post([FromBody]Customer cust)
        {
            var newCust = _Repository.InsertCustomer(cust);
            if (newCust != null)
            {
                return Created<Customer>(Request.RequestUri + newCust.ID.ToString(), newCust);
            }
            else
            {
                return Conflict();
            }
        }

    More details on IHttpActionResult and the different helper methods provided by the ApiController base class can be found here.

    Conclusion

    Although there are several additional features available in Web API 2 that I could cover (CORS support, for example), this post focused on two of my favorite features. If you have .NET 4.5.1 available then I definitely recommend checking the new features out. Additional articles that cover features in ASP.NET Web API 2 can be found here.
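    As a quick taste of the CORS support mentioned in the conclusion (an editorial sketch, not from the original post: it assumes the Microsoft.AspNet.WebApi.Cors package, and the origin URL is a placeholder):

        using System.Web.Http;
        using System.Web.Http.Cors;

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                // Enables evaluation of [EnableCors] attributes on controllers and actions.
                config.EnableCors();
                config.MapHttpAttributeRoutes();
            }
        }

        // Restrict the origin to the sites that actually need cross-origin access.
        [EnableCors(origins: "http://example.com", headers: "*", methods: "*")]
        public class CustomersController : ApiController
        {
            [HttpGet]
            [Route("api/customers/{custId:int}/orders")]
            public IHttpActionResult Orders(int custId)
            {
                return Ok();
            }
        }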

  • Given an XML which contains a representation of a graph, how to apply the DFS algorithm to it?

    - by winston smith
    Given the following XML, which is a directed graph:

        <?xml version="1.0" encoding="iso-8859-1" ?>
        <!DOCTYPE graph PUBLIC "-//FC//DTD red//EN" "../dtd/graph.dtd">
        <graph direct="1">
            <vertex label="V0"/>
            <vertex label="V1"/>
            <vertex label="V2"/>
            <vertex label="V3"/>
            <vertex label="V4"/>
            <vertex label="V5"/>
            <edge source="V0" target="V1" weight="1"/>
            <edge source="V0" target="V4" weight="1"/>
            <edge source="V5" target="V2" weight="1"/>
            <edge source="V5" target="V4" weight="1"/>
            <edge source="V1" target="V2" weight="1"/>
            <edge source="V1" target="V3" weight="1"/>
            <edge source="V1" target="V4" weight="1"/>
            <edge source="V2" target="V3" weight="1"/>
        </graph>

    With these classes I parsed the graph and gave it an adjacency list representation:

        import java.io.IOException;
        import java.util.Collection;
        import java.util.HashSet;
        import java.util.Iterator;
        import java.util.LinkedList;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import practica3.util.Disc;

        public class ParsingXML {
            public static void main(String[] args) {
                try {
                    Collection<Vertex> sources = new HashSet<Vertex>();
                    LinkedList<String> lines = Disc.readFile("xml/directed.xml");
                    for (String lin : lines) {
                        int i = Disc.find(lin, "source=\"");
                        String data = "";
                        if (i > 0 && i < lin.length()) {
                            while (lin.charAt(i + 1) != '"') {
                                data += lin.charAt(i + 1);
                                i++;
                            }
                            Vertex v = new Vertex();
                            v.setName(data);
                            v.setAdy(new HashSet<Vertex>());
                            sources.add(v);
                        }
                    }
                    Iterator it = sources.iterator();
                    while (it.hasNext()) {
                        Vertex ver = (Vertex) it.next();
                        Collection<Vertex> adyacencias = ver.getAdy();
                        LinkedList<String> ls = Disc.readFile("xml/graphs.xml");
                        for (String lin : ls) {
                            int i = Disc.find(lin, "target=\"");
                            String data = "";
                            if (lin.contains("source=\"" + ver.getName())) {
                                Vertex v = new Vertex();
                                if (i > 0 && i < lin.length()) {
                                    while (lin.charAt(i + 1) != '"') {
                                        data += lin.charAt(i + 1);
                                        i++;
                                    }
                                    v.setName(data);
                                }
                                i = Disc.find(lin, "weight=\"");
                                data = "";
                                if (i > 0 && i < lin.length()) {
                                    while (lin.charAt(i + 1) != '"') {
                                        data += lin.charAt(i + 1);
                                        i++;
                                    }
                                    v.setWeight(Integer.parseInt(data));
                                }
                                if (v.getName() != null) {
                                    adyacencias.add(v);
                                }
                            }
                        }
                    }
                    for (Vertex vert : sources) {
                        System.out.println(vert);
                        System.out.println("adyacencias: " + vert.getAdy());
                    }
                } catch (IOException ex) {
                    Logger.getLogger(ParsingXML.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    This is another class:

        import java.util.Collection;
        import java.util.Objects;

        public class Vertex {
            private String name;
            private int weight;
            private Collection ady;

            public Collection getAdy() { return ady; }
            public void setAdy(Collection adyacencias) { this.ady = adyacencias; }
            public String getName() { return name; }
            public void setName(String nombre) { this.name = nombre; }
            public int getWeight() { return weight; }
            public void setWeight(int weight) { this.weight = weight; }

            @Override
            public int hashCode() {
                int hash = 7;
                hash = 43 * hash + Objects.hashCode(this.name);
                hash = 43 * hash + this.weight;
                return hash;
            }

            @Override
            public boolean equals(Object obj) {
                if (obj == null) {
                    return false;
                }
                if (getClass() != obj.getClass()) {
                    return false;
                }
                final Vertex other = (Vertex) obj;
                if (!Objects.equals(this.name, other.name)) {
                    return false;
                }
                if (this.weight != other.weight) {
                    return false;
                }
                return true;
            }

            @Override
            public String toString() {
                return "Vertice{" + "name=" + name + ", weight=" + weight + '}';
            }
        }

    And finally (comments translated from Spanish):

        /* <Disc.java> Reading and writing of files as lists of strings.
           Useful for the graph classes.
           @author Peralta Santa Anna Victor Miguel, July 2011 */

        import java.io.*;
        import java.util.LinkedList;

        public class Disc {

            /** Reads a file.
             * @param fileName the file to read
             * @return the file contents as a list of strings */
            public static LinkedList<String> readFile(String fileName) throws IOException {
                BufferedReader file = new BufferedReader(new FileReader(fileName));
                LinkedList<String> textlist = new LinkedList<String>();
                while (file.ready()) {
                    textlist.add(file.readLine().trim());
                }
                file.close();
                return textlist;
            }

            public static int find(String linea, String palabra) {
                int i, j;
                boolean found = false;
                for (i = 0, j = 0; i < linea.length(); i++) {
                    if (linea.charAt(i) == palabra.charAt(j)) {
                        j++;
                        if (j == palabra.length()) {
                            found = true;
                            return i;
                        }
                    } else {
                        continue;
                    }
                }
                if (!found) {
                    i = -1;
                }
                return i;
            }

            /** Writes a list of strings to a file.
             * @param fileName the file to write
             * @param tofile the list of strings to write to the file
             * @param append whether to append or start from scratch */
            public static void writeFile(String fileName, LinkedList<String> tofile, boolean append) throws IOException {
                FileWriter file = new FileWriter(fileName, append);
                for (int i = 0; i < tofile.size(); i++) {
                    file.write(tofile.get(i) + "\n");
                }
                file.close();
            }

            /** Writes a single string to a file.
             * @param msg the file to write
             * @param tofile the string to write to the file
             * @param append whether to append or start from scratch */
            public static void writeFile(String msg, String tofile, boolean append) throws IOException {
                FileWriter file = new FileWriter(msg, append);
                file.write(tofile);
                file.close();
            }
        }

    I'm stuck on the best way, given this adjacency-list representation of the graph, to apply the depth-first search algorithm to it. Any idea of how to approach the task?
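    A note on applying DFS here (an editorial sketch, not from the original question): the adjacency entries built by ParsingXML are fresh Vertex copies whose own adjacency collections are never populated, so a traversal has to resolve each neighbour back to the fully populated vertex by name. Keying the visited set on names also sidesteps the weight mismatch between originals and copies in Vertex.equals(). A minimal recursive sketch under those assumptions, reusing the Vertex class above:

        import java.util.Collection;
        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        public class DepthFirstSearch {

            // Walks every vertex reachable from 'start', printing names in DFS order.
            // 'sources' is the fully populated vertex collection built by ParsingXML.
            public static void dfs(Vertex start, Collection<Vertex> sources) {
                // Index the real vertices by name so adjacency copies can be resolved.
                Map<String, Vertex> byName = new HashMap<String, Vertex>();
                for (Vertex v : sources) {
                    byName.put(v.getName(), v);
                }
                visit(start, byName, new HashSet<String>());
            }

            private static void visit(Vertex current, Map<String, Vertex> byName, Set<String> visited) {
                if (current == null || !visited.add(current.getName())) {
                    return; // unknown vertex, or already visited
                }
                System.out.println("visiting: " + current.getName());
                Collection<?> neighbours = current.getAdy();
                if (neighbours == null) {
                    return; // adjacency copies carry no neighbours of their own
                }
                for (Object o : neighbours) { // getAdy() is a raw Collection in Vertex
                    Vertex copy = (Vertex) o;
                    visit(byName.get(copy.getName()), byName, visited);
                }
            }
        }

    Calling dfs(someVertex, sources) from the end of ParsingXML.main would print one DFS order; for a possibly disconnected graph, loop over sources and start a new DFS from every unvisited vertex.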

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework.

    DataContractJsonSerializer sucks

    As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are: no support for untyped values (object, dynamic, anonymous types); MS AJAX-style date formatting; and ugly serialization formats for types like dictionaries. To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there is IEnumerable query results from a database, with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid, for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

        public object GetAnonymousType()
        {
            return new {
                name = "Rick",
                company = "West Wind",
                entered = DateTime.Now
            };
        }

    Basically, anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet on what capacity it will show up in. Let's hope it ends up as the default in the box. Meanwhile, this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version.

    What about JsonValue?

    To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that let you address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
    For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this:

        public JsonValue GetJsonValue()
        {
            dynamic json = new JsonObject();
            json.name = "Rick";
            json.company = "West Wind";
            json.entered = DateTime.Now;

            dynamic address = new JsonObject();
            address.street = "32 Kaiea";
            address.zip = "96779";
            json.address = address;

            dynamic phones = new JsonArray();
            json.phoneNumbers = phones;

            dynamic phone = new JsonObject();
            phone.type = "Home";
            phone.number = "808 123-1233";
            phones.Add(phone);

            phone = new JsonObject();
            phone.type = "Mobile";
            phone.number = "808 123-1234";
            phones.Add(phone);

            //var jsonString = json.ToString();

            return json;
        }

    which produces the following output (formatted here for easier reading):

        {
            name: "Rick",
            company: "West Wind",
            entered: "2012-03-08T15:33:19.673-10:00",
            address: {
                street: "32 Kaiea",
                zip: "96779"
            },
            phoneNumbers: [
                { type: "Home", number: "808 123-1233" },
                { type: "Mobile", number: "808 123-1234" }
            ]
        }

    If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse, a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see, there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary. It's a manual process.

    Using alternate Serializers in Web API

    So, currently the default serializer in Web API is DataContractJsonSerializer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type== typeof(JsonArray) ) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json,type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable. 
If you'd rather use Json.NET here's the JSON.NET version of the formatter:// this code requires a reference to JSON.NET in your project #if true using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using Newtonsoft.Json; using System.IO; using Newtonsoft.Json.Converters; namespace Westwind.Web.WebApi { public class JsonNetFormatter : MediaTypeFormatter { public JsonNetFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; var sr = new StreamReader(stream); var jreader = new JsonTextReader(sr); var ser = new JsonSerializer(); ser.Converters.Add(new IsoDateTimeConverter()); object val = ser.Deserialize(jreader, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; string json = JsonConvert.SerializeObject(value, Formatting.Indented, new JsonConverter[1] { new IsoDateTimeConverter() } ); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } #endif   One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter which is nice to product proper ISO dates that I would expect any Json serializer to output these days. Hooking up the Formatters Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers // optional RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // Add JavaScriptSerializer formatter instead - add at top to make default //config.Formatters.Insert(0, new JavaScriptSerializerFormatter()); // Add Json.net formatter - add at the top so it fires first! 
// This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled config.Formatters.Insert(0, new JsonNetFormatter()); } One thing to remember here is the GlobalConfiguration object which is Web API's static configuration instance. I think this thing is seriously misnamed given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types JsonValue et al. still get properly serialized/deserialized. Summary Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability with relatively limited effort to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services you can plug in the JavaScript serializer and get exactly the same serializer you used in the past so your results will be the same and don't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example) and so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging in Json.Net into Web API and make that the default. I think that's an awesome choice since Json.net has been around forever, is fast and easy to use and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of now at the last minute integrating with it especially given that Json.Net has a similar set of lower level JSON objects JsonValue/JsonObject etc. which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.net). For years I've been using my own JSON serializer because the built in choices are both limited. However, with an official encorsement of Json.Net I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.© Rick Strahl, West Wind Technologies, 2005-2012Posted in Web Api  AJAX  ASP.NET   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are a tour of the task and the properties of the task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task:

    Returning a single value from a SQL query with two input parameters.
    Returning a rowset from a SQL query.
    Executing a stored procedure and retrieving a rowset, a return value, an output parameter value, and passing in an input parameter.
    Passing in the SQL statement from a variable.
    Passing in the SQL statement from a file.

    Tour of the Task

    Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. Only certain types of connection manager are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics further on. Double click on the task itself to take a look at the custom user interface provided for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. Here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the ResultSet property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property, you will see the ways in which you can pass a SQL statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. If you click in the empty box next to the SQLStatement property, you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads-up, this is where you define the input, output and return values for your task. Note this is not where you specify the result set. If, however, you now move on to the Result Set tab, this is where you define what variable will receive the output from your SQL statement, in whatever form that is.
    Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here, as they deserve a whole article to themselves. Watch out for this, as their usefulness will astound you. For a more detailed discussion of what should be the parameter markers in the SQL statements on the General tab and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task.

    Task Properties

    There are two places where you can specify the properties for your task. One is in the task UI itself, and the other is in the property pane, which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks, as well as the package, because of everything's base structure, the Container.

    BypassPrepare: Should the statement be prepared before being sent to the connection manager destination (True/False)?
    Connection: This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
    DelayValidation: A really interesting property; it tells the task not to validate until it actually executes. A usage for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
    Description: Very simply, the description of your task.
    Disable: Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself.
    DisableEventHandlers: As a result of events that happen in the task, should the event handlers for the container fire?
    ExecValueVariable: The variable assigned here will get or set the execution value of the task.
    Expressions: Expressions, as we mentioned earlier, are a really powerful tool in SSIS, and the graphic below shows a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
    FailPackageOnFailure: If this task fails, does the package?
    FailParentOnFailure: If this task fails, does the parent container? A task can be hosted inside another container, e.g. the For Each Loop Container, and this would then be the parent.
    ForcedExecutionValue: This property allows you to hard-code an execution value for the task.
    ForcedExecutionValueType: What is the datatype of the ForcedExecutionValue?
    ForceExecutionResult: Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
    ForceExecutionValue: Should we force the execution result?
    IsolationLevel: This is the transaction isolation level of the task.
    IsStoredProcedure: Certain optimisations are made by the task if it knows that the query is a stored procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
    LocaleID: Gets or sets the LocaleID of the container.
    LoggingMode: Should we log for this container, and what settings should we use? The value choices are UseParentSetting, Enabled and Disabled.
    MaximumErrorCount: How many times can the task fail before we call it a day?
    Name: Very simply, the name of the task.
    ResultSetType: How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
    SqlStatementSource: Your query/SQL statement.
    SqlStatementSourceType: The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
    TimeOut: How long should the task wait to receive results?
    TransactionOption: How should the task handle being asked to join a transaction?

    Usage Examples

    As we move through the examples we will only cover what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005.

    Returning a Single Value, Passing in Two Input Parameters

    So the first thing we are going to do is add some variables to our package. The graphic below shows those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created an OLE DB connection manager and assigned it to this task. The next thing is we have made sure that the SQLSourceType property is set to DirectInput, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The statement we typed in was:

        SELECT COUNT(*) AS CountOfEmployees
        FROM HumanResources.Employee
        WHERE (HireDate BETWEEN ? AND ?)

    Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name: as you can see, SSIS has preceded the variable with the word User. This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, and one of them is Namespace; after checking it you will see where you can define your own namespace. The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value, then, if you remember from earlier, we are allowed to assign a name to the result set, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
    In our package we set a breakpoint so we can break the package and look in a watch window at the variable values as they appear to our task, and at the variable value of our result set after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2.

    Returning a Rowset

    In this example we are going to return a result set back to a variable after the task has executed, not just a single-row, single value. There are no input parameters required, so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our result set:

        SELECT p.ProductNumber, p.Name, pc.Name AS ProductCategoryName
        FROM Production.ProductCategory pc
        JOIN Production.ProductSubCategory psc
            ON pc.ProductCategoryID = psc.ProductCategoryID
        JOIN Production.Product p
            ON psc.ProductSubCategoryID = p.ProductSubCategoryID

    We need to make sure that we have selected "Full result set" as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to add our variable defined earlier and map it to the result name of 0 (remember, we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing.

    Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure

    This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a full result set, so we will not cover it again here. For this example we are going to need four variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000.

    Passing in the SQL Statement from a Variable

    SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different. It will allow us to pass in a string variable as the SQL statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package, you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the result name to our variable, and this can be a named result name (the column name or alias returned by the query) and not 0.
    The expected result in our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is.

    Passing in the SQL Statement from a File

    The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for the other examples, depending on the options chosen, so we will not cover them again here.

    We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
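    An editorial aside (not from the original article): the stored-procedure statement referenced above was shown only as a screenshot. With an OLE DB connection manager, such a statement uses ? markers for the return value and each parameter, which are then mapped by ordinal (0, 1, 2, ...) on the Parameter Mapping tab; the procedure name and parameter layout below are illustrative:

        -- Parameter 0 = return value, 1 = input parameter, 2 = output parameter
        EXEC ? = dbo.uspGetEmployeeCount ?, ? OUTPUT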

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?" Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for. In general, that's quite correct; however, there are cases where SQL Server might sneakily retrieve a LOB column…

    Example Table

    Here's a T-SQL script to create a test table and populate it with 1,000 rows:

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER IDENTITY NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO

        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
            (some_value, lob_data)
        SELECT TOP (1000)
            N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Test 1: A Simple Update

    Let's run a query to subtract one from every value in the some_value column:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads and 25ms elapsed time. The query plan is also very simple. Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan. The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

    Test 2: Simple Update with an Index

    Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column:

        CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);

    This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let's run our update query again:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before. The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn't add any value in this specific case. The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
    Overall then, adding the nonclustered index hasn't had any startling effects, and the LOB column data still isn't being read from the table. Let's see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here's the script to create a new unique index, and drop the old one:

        CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);
        GO

        DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest;

    Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf level of the nonclustered index. With that in mind, we might expect this change to make very little difference. Let's see:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    Whoa! Now look at the elapsed time and logical reads:

        Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        CPU time = 172 ms, elapsed time = 16172 ms.

    Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I'll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns - so what's doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven't touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
    Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

        SET NOCOUNT ON;
        SET STATISTICS XML, IO, TIME OFF;

        DECLARE @PK INTEGER, @StartTime DATETIME;
        SET @StartTime = GETUTCDATE();

        DECLARE curUpdate CURSOR
            LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
        FOR
            SELECT L.pk
            FROM LOBtest L
            ORDER BY L.pk ASC;

        OPEN curUpdate;

        WHILE (1 = 1)
        BEGIN
            FETCH NEXT FROM curUpdate INTO @PK;

            IF @@FETCH_STATUS = -1 BREAK;
            IF @@FETCH_STATUS = -2 CONTINUE;

            UPDATE dbo.LOBtest
            SET some_value = some_value - 1
            WHERE CURRENT OF curUpdate;
        END;

        CLOSE curUpdate;
        DEALLOCATE curUpdate;

        SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

    That completes the update in 1,280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

    Clustered Indexes

    A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let's re-create the test table and data with an updatable primary key, and without any nonclustered indexes:

        IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
            DROP TABLE dbo.LOBtest;
        GO

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO

        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
            (pk, some_value, lob_data)
        SELECT TOP (1000)
            N.n, N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Now here's a query to modify the cluster keys:

        UPDATE dbo.LOBtest
        SET pk = pk + 1;

    As you can see from the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no nonclustered index to maintain):

        Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0,
        lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.

        SQL Server Execution Times:
        CPU time = 483 ms, elapsed time = 17884 ms.

    Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you'll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns). A unique nonclustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don't have space for here.

    Final Thoughts

    The behaviour shown in this post is not limited to LOB data by any means.
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • Custom ViewModel with MVC 2 Strongly Typed HTML Helpers return null object on Create?

    - by Barbaros Alp
    Hi, I am having trouble trying to create an entity with a custom view-modeled create form. Below is my custom view model for the category creation form. public class CategoryFormViewModel { public CategoryFormViewModel(Category category, string actionTitle) { Category = category; ActionTitle = actionTitle; } public Category Category { get; private set; } public string ActionTitle { get; private set; } } and this is my user control where the UI is <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<CategoryFormViewModel>" %> <h2> <span><%= Html.Encode(Model.ActionTitle) %></span> </h2> <%=Html.ValidationSummary() %> <% using (Html.BeginForm()) {%> <p> <span class="bold block">Baslik:</span> <%=Html.TextBoxFor(model => Model.Category.Title, new { @class = "width80 txt-base" })%> </p> <p> <span class="bold block">Sira Numarasi:</span> <%=Html.TextBoxFor(model => Model.Category.OrderNo, new { @class = "width10 txt-base" })%> </p> <p> <input type="submit" class="btn-admin cursorPointer" value="Save" /> </p> <% } %> When I click the save button, it doesn't bind the category for me, because I am using a custom view model and strongly typed HTML helpers like this: <%=Html.TextBoxFor(model => Model.Category.OrderNo) %> How can I fix this? Thanks in advance
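    The excerpt does not include an accepted fix, but one commonly suggested direction (an assumption on my part) is to bind the nested object directly and tell the DefaultModelBinder which prefix the rendered inputs carry; the helpers above emit names such as "Category.Title", and the binder also needs a constructible target with settable properties. A sketch:

        // Hypothetical controller action for the form above.
        [HttpPost]
        public ActionResult Create([Bind(Prefix = "Category")] Category category)
        {
            // category.Title / category.OrderNo are bound from the
            // "Category.Title" / "Category.OrderNo" form fields.
            if (!ModelState.IsValid)
                return View(new CategoryFormViewModel(category, "Create"));

            // ... persist the category, then redirect (omitted)
            return RedirectToAction("Index");
        }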

    Read the article

  • BASH statements execute alone but return "no such file" in for loop.

    - by reve_etrange
    Another one I can't find an answer for, and it feels like I've gone mad. I have a BASH script using a for loop to run a complex command (many protein sequence alignments) on a lot of files (~5000). The loop produces statements that will execute when given alone (i.e. copy-pasted from the error message to the command prompt), but which return "no such file or directory" inside the loop. Script below; there are actually several more arguments but this includes some representative ones and the file arguments. #!/bin/bash # Pass directory with targets as FASTA sequences as argument. # Arguments to psiblast # Common db=local/db/nr/nr outfile="/mnt/scratch/psi-blast" e=0.001 threads=8 itnum=5 pssm="/mnt/scratch/psi-blast/pssm." pssm_txt="/mnt/scratch/psi-blast/pssm." pseudo=0 pwa_inclusion=0.002 for i in ${1}/* do filename=$(basename $i) "local/ncbi-blast-2.2.23+/bin/psiblast\ -query ${i}\ -db $db\ -out ${outfile}/${filename}.out\ -evalue $e\ -num_threads $threads\ -num_iterations $itnum\ -out_pssm ${pssm}$filename\ -out_ascii_pssm ${pssm_txt}${filename}.txt\ -pseudocount $pseudo\ -inclusion_ethresh $pwa_inclusion" done Running this script gives "<scriptname> line <last line before 'done'>: <attempted command>: No such file or directory". If I then paste the attempted command onto the prompt it will run. Each of these commands takes a couple of minutes to run.
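    The symptom matches the double quotes wrapped around the whole command: quoting it makes bash treat the entire string, spaces and all, as the name of a single program to execute, hence "No such file or directory" for a command that pastes fine by hand. A sketch of the loop body without the outer quotes (my reading of the intent, not tested against the real data):

        for i in "${1}"/*
        do
            filename=$(basename "$i")
            local/ncbi-blast-2.2.23+/bin/psiblast \
                -query "$i" \
                -db "$db" \
                -out "${outfile}/${filename}.out" \
                -evalue "$e" \
                -num_threads "$threads" \
                -num_iterations "$itnum" \
                -out_pssm "${pssm}${filename}" \
                -out_ascii_pssm "${pssm_txt}${filename}.txt" \
                -pseudocount "$pseudo" \
                -inclusion_ethresh "$pwa_inclusion"
        done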

    Read the article

  • JSF: How to get the selected item from selectOneMenu if its rendering is dynamic?

    - by Dzmitry Zhaleznichenka
    At my view I have two menus that I want to make dependent, namely, if first menu holds values "render second menu" and "don't render second menu", I want second menu to be rendered only if user selects "render second menu" option in the first menu. After second menu renders at the same page as the first one, user has to select current item from it, fill another fields and submit the form to store values in database. Both the lists of options are static, they are obtained from the database once and stay the same. My problem is I always get null as a value of the selected item from second menu. How to get a proper value? The sample code of view that holds problematic elements is: <h:selectOneMenu id="renderSecond" value="#{Bean.renderSecondValue}" valueChangeListener="#{Bean.updateDependentMenus}" immediate="true" onchange="this.form.submit();" > <f:selectItems value="#{Bean.typesOfRendering}" /> </h:selectOneMenu><br /> <h:selectOneMenu id="iWillReturnYouZeroAnyway" value="#{Bean.currentItem}" rendered="#{Bean.rendered}" > <f:selectItems value="#{Bean.items}" /> </h:selectOneMenu><br /> <h:commandButton action="#{Bean.store}" value="#Store" /> However, if I remove "rendered" attribute from the second menu, everything works properly, except for displaying the menu for all the time that I try to prevent, so I suggest the problem is in behavior of dynamic rendering. The initial value of isRendered is false because the default item in first menu is "don't render second menu". When I change value in first menu and update isRendered with valueChangeListener, the second menu displays but it doesn't initialize currentItem while submitting. Some code from my backing bean is below: public void updateDependentMenus(ValueChangeEvent value) { String newValue = (String) value.getNewValue(); if ("render second menu" == newValue){ isRendered = true; } else { isRendered = false; } } public String store(){ System.out.println(currentItem); return "stored"; }
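    One hedged line of attack (an assumption; the excerpt does not say how the bean is scoped): with a request-scoped bean, isRendered falls back to false on the POST that saves the form, so the second menu is never decoded and currentItem stays null, which matches the behaviour observed. Keeping the flag alive across that postback is the usual remedy. A sketch with JSF 2 annotations (under JSF 1.x the same is declared via managed-bean-scope in faces-config.xml); note also that "render second menu" == newValue compares references in Java, where equals() is almost certainly intended:

        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.SessionScoped;

        @ManagedBean(name = "Bean")
        @SessionScoped   // or @ViewScoped in JSF 2.x
        public class Bean {
            private boolean isRendered = false;
            private String currentItem;

            public boolean isRendered() { return isRendered; }
            public String getCurrentItem() { return currentItem; }
            public void setCurrentItem(String currentItem) { this.currentItem = currentItem; }
            // updateDependentMenus(...) and store() as in the question, with
            // "render second menu".equals(newValue) in the listener.
        }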

    Read the article

  • Is there a better way to create a generic convert string to enum method or enum extension?

    - by Kelsey
    I have the following methods in an enum helper class (I have simplified it for the purpose of the question): static class EnumHelper { public enum EnumType1 : int { Unknown = 0, Yes = 1, No = 2 } public enum EnumType2 : int { Unknown = 0, Dog = 1, Cat = 2, Bird = 3 } public enum EnumType3 : int { Unknown = 0, iPhone = 1, Andriod = 2, WindowsPhone7 = 3, Palm = 4 } public static EnumType1 ConvertToEnumType1(string value) { return (string.IsNullOrEmpty(value)) ? EnumType1.Unknown : (EnumType1)(Enum.Parse(typeof(EnumType1), value, true)); } public static EnumType2 ConvertToEnumType2(string value) { return (string.IsNullOrEmpty(value)) ? EnumType2.Unknown : (EnumType2)(Enum.Parse(typeof(EnumType2), value, true)); } public static EnumType3 ConvertToEnumType3(string value) { return (string.IsNullOrEmpty(value)) ? EnumType3.Unknown : (EnumType3)(Enum.Parse(typeof(EnumType3), value, true)); } } So the question here is, can I trim this down to an Enum extension method or maybe some type of single method that can handle any type. I have found some examples to do so with basic enums but the difference in my example is all the enums have the Unknown item that I need returned if the string is null or empty (if no match is found I want it to fail). Looking for something like the following maybe: EnumType1 value = EnumType1.Convert("Yes"); // or EnumType1 value = EnumHelper.Convert(EnumType1, "Yes"); One function to do it all... how to handle the Unknown element is the part that I am hung up on.
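    A sketch of a single generic helper (one possible shape, relying on the convention, true of all three enums above, that Unknown = 0 and therefore equals default(T)):

        // Inside the same static EnumHelper class:
        public static T Convert<T>(string value) where T : struct
        {
            if (string.IsNullOrEmpty(value))
                return default(T);   // Unknown = 0 by convention

            // Throws ArgumentException when no name matches, which satisfies
            // the "fail if no match is found" requirement.
            return (T)Enum.Parse(typeof(T), value, true);
        }

        // Usage:
        // EnumHelper.EnumType1 v = EnumHelper.Convert<EnumHelper.EnumType1>("Yes");

    C# cannot constrain T to System.Enum until C# 7.3, so where T : struct is as tight as the constraint gets here.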

    Read the article

  • Rails Autocompletion Issue - Rails 1.2.3 to 2.3.5

    - by Grant Sayer
    I have an issue with rails Autocompletion from some code that i've inherited from an old Rails 1.2.3 project that I'm porting to Rails 2.3.5. The issue revolves around javascript execution within the auto_complete helper :after_update_element. The scenario is: A user is presented with a popup form with a number of fields. In the first field as they enter text the auto_complete AJAX call occurs, returning a result, plus a series of other HTML data wrapped in <divs> so that the after_update_element call can iterate over the other data and fill in the remaining fields. The issue lies with the extraction of the other fields which works on IE, fails on Firefox. Here is the code: <%= text_field_with_auto_complete :item, :product_code, {:value => ""}, {:size => 40, :class => "input-text", :tabindex => 6, :select => 'code', :with => "element.name + '=' + escape(element.value) + '&supplier_id=' + $('item_supplier_id').value", :after_update_element => "function (ele, value) { $('item_supplier_id').value = Utilities.extract_value(value, 'supplier_id'); $('item_supplied_size').value = Utilities.extract_value(value, 'size')}"}%> Now the function Utilities is designed to grab the fields from the string of values and looks like: // // Extract a particular set of data from the autocomplete actions // Utilities.extract_value = function (value, className) { var result; var elements = document.getElementsByClassName(className, value); if (elements && elements.length == 1) { result = elements[0].innerHTML.unescapeHTML(); } return result; }; In Firefox the value of result is undefined
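    A possible culprit (an assumption based on Prototype-era behaviour, not confirmed by the thread): Firefox 3 shipped a native document.getElementsByClassName that takes a single argument, shadowing Prototype's two-argument (className, parentElement) version, so the "value" scope is silently ignored. Scoping the lookup to the element itself side-steps that (assumes Prototype 1.6+):

        // value is the selected <li> element passed to after_update_element
        Utilities.extract_value = function (value, className) {
            var elements = $(value).select('.' + className);
            if (elements && elements.length == 1) {
                return elements[0].innerHTML.unescapeHTML();
            }
            return undefined;
        };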

    Read the article

  • Map enum in JPA with fixed values?

    - by Kartoch
    I'm looking for the different ways to map an enum using JPA. I especially want to set the integer value of each enum entry and to save only the integer value. @Entity @Table(name = "AUTHORITY_") public class Authority implements Serializable { public enum Right { READ(100), WRITE(200), EDITOR (300); private int value; Right(int value) { this.value = value; } public int getValue() { return value; } }; @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "AUTHORITY_ID") private Long id; // the enum to map : private Right right; } A simple solution is to use the Enumerated annotation with EnumType.ORDINAL: @Column(name = "RIGHT") @Enumerated(EnumType.ORDINAL) private Right right; But in this case JPA maps the enum index (0,1,2) and not the value I want (100,200,300). The two solutions I found do not seem simple... First Solution A solution, proposed here, uses @PrePersist and @PostLoad to convert the enum to another field and mark the enum field as transient: @Basic private int intValueForAnEnum; @PrePersist void populateDBFields() { intValueForAnEnum = right.getValue(); } @PostLoad void populateTransientFields() { right = Right.valueOf(intValueForAnEnum); } Second Solution The second solution, proposed here, uses a generic conversion object, but still seems heavy and Hibernate-oriented (@Type doesn't seem to exist in JEE): @Type( type = "org.appfuse.tutorial.commons.hibernate.GenericEnumUserType", parameters = { @Parameter( name = "enumClass", value = "Authority$Right"), @Parameter( name = "identifierMethod", value = "toInt"), @Parameter( name = "valueOfMethod", value = "fromInt") } ) Are there any other solutions? I have several ideas in mind but I don't know if they exist in JPA: use the setter and getter methods of the right member of the Authority class when loading and saving the Authority object; an equivalent idea would be to tell JPA which methods of the Right enum convert the enum to an int and an int to the enum. Because I'm using Spring, is there any way to tell JPA to use a specific converter (RightEditor)?
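    For completeness, JPA 2.1 later standardised a third option, the AttributeConverter, which keeps the int mapping out of the entity entirely (a sketch; this was not available under the JPA version current when the question was written):

        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        @Converter(autoApply = true)
        public class RightConverter
                implements AttributeConverter<Authority.Right, Integer> {

            @Override
            public Integer convertToDatabaseColumn(Authority.Right right) {
                return right == null ? null : right.getValue();
            }

            @Override
            public Authority.Right convertToEntityAttribute(Integer dbValue) {
                if (dbValue == null) return null;
                for (Authority.Right r : Authority.Right.values()) {
                    if (r.getValue() == dbValue) return r;
                }
                throw new IllegalArgumentException("Unknown Right: " + dbValue);
            }
        }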

    Read the article

  • Should a Perl constructor return an undef or an "invalid" object?

    - by DVK
    Question: What is considered to be "Best practice" - and why - of handling errors in a constructor?. "Best Practice" can be a quote from Schwartz, or 50% of CPAN modules use it, etc...; but I'm happy with well reasoned opinion from anyone even if it explains why the common best practice is not really the best approach. As far as my own view of the topic (informed by software development in Perl for many years), I have seen three main approaches to error handling in a perl module (listed from best to worst in my opinion): Construct an object, set an invalid flag (usually "is_valid" method). Often coupled with setting error message via your class's error handling. Pros: Allows for standard (compared to other method calls) error handling as it allows to use $obj->errors() type calls after a bad constructor just like after any other method call. Allows for additional info to be passed (e.g. 1 error, warnings, etc...) Allows for lightweight "redo"/"fixme" functionality, In other words, if the object that is constructed is very heavy, with many complex attributes that are 100% always OK, and the only reason it is not valid is because someone entered an incorrect date, you can simply do "$obj->setDate()" instead of the overhead of re-executing entire constructor again. This pattern is not always needed, but can be enormously useful in the right design. Cons: None that I'm aware of. Return "undef". Cons: Can not achieve any of the Pros of the first solution (per-object error messages outside of global variables and lightweight "fixme" capability for heavy objects). Die inside the constructor. Outside of some very narrow edge cases, I personally consider this an awful choice for too many reasons to list on the margins of this question. UPDATE: Just to be clear, I consider the (otherwise very worthy and a great design) solution of having very simple constructor that can't fail at all and a heavy initializer method where all the error checking occurs to be merely a subset of either case #1 (if initializer sets error flags) or case #3 (if initializer dies) for the purposes of this question. Obviously, choosing such a design, you automatically reject option #2.
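    For readers unfamiliar with approach #1, a minimal sketch of the construct-then-flag pattern (class and method names illustrative, not from the question):

        package MyClass;
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            my $self = bless { errors => [], date => $args{date} }, $class;
            # Record problems instead of dying or returning undef:
            push @{ $self->{errors} }, 'date is invalid'
                unless defined $args{date} && $args{date} =~ /^\d{4}-\d{2}-\d{2}$/;
            return $self;
        }

        sub is_valid { my $self = shift; return !@{ $self->{errors} }; }
        sub errors   { my $self = shift; return @{ $self->{errors} }; }

        # Caller:
        # my $obj = MyClass->new(date => '2010-13-45');
        # warn join(', ', $obj->errors) unless $obj->is_valid;

        1;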

    Read the article

  • Synchronization of Nested Data Structures between Threads in Java

    - by Dominik
    I have a cache implementation like this: class X { private final Map<String, ConcurrentMap<String, String>> structure = new HashMap...(); public String getValue(String context, String id) { // just assume for this example that there will be always an innner map final ConcurrentMap<String, String> innerStructure = structure.get(context); String value = innerStructure.get(id); if(value == null) { synchronized(structure) { // can I be sure, that this inner map will represent the last updated // state from any thread? value = innerStructure.get(id); if(value == null) { value = getValueFromSomeSlowSource(id); innerStructure.put(id, value); } } } return value; } } Is this implementation thread-safe? Can I be sure to get the last updated state from any thread inside the synchronized block? Would this behaviour change if I use a java.util.concurrent.ReentrantLock instead of a synchronized block, like this: ... if(lock.tryLock(3, SECONDS)) { try { value = innerStructure.get(id); if(value == null) { value = getValueFromSomeSlowSource(id); innerStructure.put(id, value); } } finally { lock.unlock(); } } ... I know that final instance members are synchronized between threads, but is this also true for the objects held by these members? Maybe this is a dumb question, but I don't know how to test it to be sure, that it works on every OS and every architecture.
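    One hedged observation: the synchronized block does create a happens-before edge, but only between threads that synchronize on the same monitor; the lock-free fast path is safe here mainly because the inner maps are ConcurrentMaps, which make their own visibility guarantees. If Java 8 is available, per-key locking expresses the intent more directly (a sketch, with a stand-in for the slow source):

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        class X {
            private final ConcurrentMap<String, ConcurrentMap<String, String>> structure =
                    new ConcurrentHashMap<String, ConcurrentMap<String, String>>();

            public String getValue(String context, String id) {
                ConcurrentMap<String, String> inner =
                        structure.computeIfAbsent(context, c -> new ConcurrentHashMap<>());
                // computeIfAbsent runs the loader at most once per absent key,
                // blocks only other writers of that key, and publishes the
                // result safely to all threads.
                return inner.computeIfAbsent(id, k -> getValueFromSomeSlowSource(k));
            }

            private String getValueFromSomeSlowSource(String id) {
                return "value-for-" + id;   // stand-in for the real lookup
            }
        }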

    Read the article

  • Twitter Typeahead shows only 5 results

    - by user3685388
    I'm using the Twitter Typeahead version 0.10.2 autocomplete but I'm only receiving 5 results from my JSON result set. I can have 20 or more results but only 5 are shown. What am I doing wrong? var engine = new Bloodhound({ name: "blackboard-names", prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false } }, remote: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false }, }, datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'), queryTokenizer: Bloodhound.tokenizers.whitespace }); var promise = engine.initialize(); promise .done(function() { console.log("done"); }) .fail(function() { console.log("fail"); }); $("#Impersonate").typeahead({ minLength: 2, highlight: true}, { name: "blackboard-names", displayKey: 'value', source: engine.ttAdapter() }).bind("typeahead:selected", function(obj, datum, name) { console.log(obj, datum, name); alert(datum.id); }); Data: [ { "id": "1", "value": "Adams, Abigail", "tokens": [ "Adams", "A", "Ad", "Ada", "Abigail", "A", "Ab", "Abi" ] }, { "id": "2", "value": "Adams, Alan", "tokens": [ "Adams", "A", "Ad", "Ada", "Alan", "A", "Al", "Ala" ] }, { "id": "3", "value": "Adams, Alison", "tokens": [ "Adams", "A", "Ad", "Ada", "Alison", "A", "Al", "Ali" ] }, { "id": "4", "value": "Adams, Amber", "tokens": [ "Adams", "A", "Ad", "Ada", "Amber", "A", "Am", "Amb" ] }, { "id": "5", "value": "Adams, Amelia", "tokens": [ "Adams", "A", "Ad", "Ada", "Amelia", "A", "Am", "Ame" ] }, { "id": "6", "value": "Adams, Arik", "tokens": [ "Adams", "A", "Ad", "Ada", "Arik", "A", "Ar", "Ari" ] }, { "id": "7", "value": "Adams, Ashele", "tokens": [ "Adams", "A", "Ad", "Ada", "Ashele", "A", "As", "Ash" ] }, { "id": "8", "value": "Adams, Brady", "tokens": [ "Adams", "A", "Ad", "Ada", "Brady", "B", "Br", "Bra" ] }, { "id": "9", "value": "Adams, Brandon", "tokens": [ "Adams", "A", "Ad", "Ada", "Brandon", "B", "Br", "Bra" ] } ]
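    One knob worth checking (documented typeahead.js behaviour, though whether it is the whole story here is an assumption): each dataset has a limit option that defaults to 5, capping the rendered suggestions no matter how many datums the Bloodhound engine returns:

        $("#Impersonate").typeahead({ minLength: 2, highlight: true }, {
            name: "blackboard-names",
            displayKey: 'value',
            source: engine.ttAdapter(),
            limit: 20   // default is 5
        });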

    Read the article

  • Project Euler 7 Scala Problem

    - by Nishu
    I was trying to solve Project Euler problem number 7 using Scala 2.8. The first solution I implemented takes ~8 seconds. def problem_7:Int = { var num = 17; var primes = new ArrayBuffer[Int](); primes += 2 primes += 3 primes += 5 primes += 7 primes += 11 primes += 13 while (primes.size < 10001){ if (isPrime(num, primes)) primes += num if (isPrime(num+2, primes)) primes += num+2 num += 6 } return primes.last; } def isPrime(num:Int, primes:ArrayBuffer[Int]):Boolean = { // if n == 2 return false; // if n == 3 return false; var r = Math.sqrt(num) for (i <- primes){ if(i <= r ){ if (num % i == 0) return false; } } return true; } Later I tried the same problem without storing prime numbers in an array buffer. This takes 0.118 seconds. def problem_7_alt:Int = { var limit = 10001; var count = 6; var num:Int = 17; while(count < limit){ if (isPrime2(num)) count += 1; if (isPrime2(num+2)) count += 1; num += 6; } return num; } def isPrime2(n:Int):Boolean = { // if n == 2 return false; // if n == 3 return false; var r = Math.sqrt(n) var f = 5; while (f <= r){ if (n % f == 0) { return false; } else if (n % (f+2) == 0) { return false; } f += 6; } return true; } I tried using various mutable array/list implementations in Scala but was not able to make the first solution faster. I do not think that storing Ints in an array of size 10001 should make the program slow. Is there some better way to use lists/arrays in Scala?
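    A hedged observation about the first version: the for loop in isPrime visits every stored prime even after i exceeds sqrt(num); the i <= r test skips the division but not the iteration, so each call still walks up to 10,000 elements. Cutting the traversal off early is one plausible fix (a sketch):

        import scala.collection.mutable.ArrayBuffer

        def isPrime(num: Int, primes: ArrayBuffer[Int]): Boolean = {
          val r = math.sqrt(num)
          // takeWhile stops at the first prime > sqrt(num); forall
          // short-circuits on the first divisor found.
          primes.takeWhile(_ <= r).forall(num % _ != 0)
        }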

    Read the article

  • decimal.TryParse() drops leading "1"

    - by Martin Harris
    Short and sweet version: On one machine out of around a hundred test machines decimal.TryParse() is converting "1.01" to 0.01 Okay, this is going to sound crazy but bear with me... We have a client application that communicates with a webservice through JSON, and that service returns a decimal value as a string so we store it as a string in our model object: [DataMember(Name = "value")] public string Value { get; set; } When we display that value on screen it is formatted to a specific number of decimal places. So the process we use is string -> decimal, then decimal -> string. The application is currently undergoing final testing and is running on more than 100 machines, where this all works fine. However on one machine if the decimal value has a leading '1' then it is replaced by a zero. I added simple logging to the code so it looks like this: Log("Original string value: {0}", value); decimal val; if (decimal.TryParse(value, out val)) { Log("Parsed decimal value: {0}", val); string output = val.ToString(format, CultureInfo.InvariantCulture.NumberFormat); Log("Formatted string value: {0}", output); return output; } On my machine - and every other client machine - the logfile output is: Original string value: 1.010000 Parsed decimal value: 1.010000 Formatted string value: 1.01 On the defective machine the output is: Original string value: 1.010000 Parsed decimal value: 0.010000 Formatted string value: 0.01 So it would appear that the decimal.TryParse method is at fault. Things we've tried: Uninstalling and reinstalling the client application Uninstalling and reinstalling .NET 3.5 SP1 Comparing the defective machine's regional settings for numbers (using English (United Kingdom)) to those of a working machine - no differences. Has anyone seen anything like this or has any suggestions? I'm quickly running out of ideas... While I was typing this some more info came in: Passing a string value of "10000" to Convert.ToInt32() returns 0, so that also seems to drop the leading 1.
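    A diagnostic sketch for the defective machine (an assumption about where to look, not a confirmed fix): parse the same string under the invariant culture and the current culture side by side. If the invariant parse is correct while the current-culture parse drops the 1, the machine's number-format data, rather than the runtime itself, becomes the prime suspect:

        using System;
        using System.Globalization;

        class ParseProbe
        {
            static void Main()
            {
                string value = "1.010000";
                decimal val;

                bool ok = decimal.TryParse(value, NumberStyles.Number,
                                           CultureInfo.InvariantCulture, out val);
                Console.WriteLine("Invariant: ok={0}, val={1}", ok, val);

                ok = decimal.TryParse(value, NumberStyles.Number,
                                      CultureInfo.CurrentCulture, out val);
                Console.WriteLine("Current ({0}): ok={1}, val={2}",
                                  CultureInfo.CurrentCulture.Name, ok, val);
            }
        }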

    Read the article

  • How to show data in dataTable as label: data in JSF

    - by palakolanusrinu
    Hi, how can I display "label: data" (e.g. name: srinu) in multiple rows using a dataTable in JSF? Right now I'm getting it like | label | data | (with the data, srinu, in a table cell), and I want it in this format: label: data, e.g. name: srinu. The code I used is <h:dataTable id="fundInfo" value="#{clientFundInfo}" border="1" var="client" first="0" rows="5" rules="all"> <h:column> <h:outputText value="CLIENT:"/> <h:outputText value="#{client.clientName}"></h:outputText> </h:column> <h:column> <h:outputText value="FUND:"/> <h:outputText value="#{client.fundName}"></h:outputText> </h:column> <h:column> <h:outputText value="Employer Identification Number:"/> <h:outputText value="#{client.empIdentificationNum}"></h:outputText> </h:column> <h:column><h:outputText value="FISCAL YEAR ENDED:"/> <h:outputText value="#{client.fye}"></h:outputText> </h:column> <h:column><h:outputText value="Shares Outstanding"/> <h:outputText value="#{client.sharesOutstanding}"></h:outputText> </h:column> </h:dataTable>
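    One way to get "label: data" pairs stacked as rows is h:panelGrid, which renders a two-column HTML table (a sketch; it assumes a single #{client} bean, so for a list of clients you would nest this inside a repeating component or a single dataTable column):

        <h:panelGrid columns="2">
            <h:outputText value="CLIENT:" />
            <h:outputText value="#{client.clientName}" />
            <h:outputText value="FUND:" />
            <h:outputText value="#{client.fundName}" />
            <h:outputText value="Employer Identification Number:" />
            <h:outputText value="#{client.empIdentificationNum}" />
        </h:panelGrid>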

    Read the article

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok"). Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7. Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.) So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
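    A hedged, development-only workaround (names illustrative; this does not make Cassini emit 304s, but it removes the repeated round-trips by marking the static content cacheable, which approximates the IIS7 experience): an IHttpModule registered in web.config's httpModules section, something like:

        using System;
        using System.Web;

        public class ContentCacheModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PreSendRequestHeaders += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    if (ctx.Request.Path.StartsWith("/Content",
                            StringComparison.OrdinalIgnoreCase))
                    {
                        // Let the browser cache /Content for an hour locally.
                        ctx.Response.Cache.SetCacheability(HttpCacheability.Public);
                        ctx.Response.Cache.SetMaxAge(TimeSpan.FromHours(1));
                    }
                };
            }

            public void Dispose() { }
        }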

    Read the article

  • How can I get this dynamic WHERE statement in my LINQ-to-XML to work?

    - by Edward Tanguay
    In this question Jon Skeet offered a very interesting solution to making a LINQ-to-XML statement dynamic, but my knowledge of lambdas and delegates is not yet advanced enough to implement it: I've got it this far, but of course I get the error "smartForm does not exist in the current context": private void LoadWithId(int id) { XDocument xmlDoc = null; try { xmlDoc = XDocument.Load(FullXmlDataStorePathAndFileName); } catch (Exception ex) { throw new Exception(String.Format("Cannot load XML file: {0}", ex.Message)); } Func<XElement, bool> whereClause = (int)smartForm.Element("id") == id"; var smartForms = xmlDoc.Descendants("smartForm") .Where(whereClause) .Select(smartForm => new SmartForm { Id = (int)smartForm.Element("id"), WhenCreated = (DateTime)smartForm.Element("whenCreated"), ItemOwner = smartForm.Element("itemOwner").Value, PublishStatus = smartForm.Element("publishStatus").Value, CorrectionOfId = (int)smartForm.Element("correctionOfId"), IdCode = smartForm.Element("idCode").Value, Title = smartForm.Element("title").Value, Description = smartForm.Element("description").Value, LabelWidth = (int)smartForm.Element("labelWidth") }); foreach (SmartForm smartForm in smartForms) { _collection.Add(smartForm); } } Ideally I want to be able to just say: var smartForms = GetSmartForms(smartForm=> (int) smartForm.Element("DisplayOrder").Value > 50); I've got it this far, but I'm just not grokking the lambda magic, how do I do this? public List<SmartForm> GetSmartForms(XDocument xmlDoc, XElement whereClause) { var smartForms = xmlDoc.Descendants("smartForm") .Where(whereClause) .Select(smartForm => new SmartForm { Id = (int)smartForm.Element("id"), WhenCreated = (DateTime)smartForm.Element("whenCreated"), ItemOwner = smartForm.Element("itemOwner").Value, PublishStatus = smartForm.Element("publishStatus").Value, CorrectionOfId = (int)smartForm.Element("correctionOfId"), IdCode = smartForm.Element("idCode").Value, Title = smartForm.Element("title").Value, Description = smartForm.Element("description").Value, LabelWidth = (int)smartForm.Element("labelWidth") }); }
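    A sketch of the lambda-based version (following the Func<XElement, bool> direction the question starts down, with names matching the question's code): the where clause becomes a delegate parameter, and callers supply a lambda:

        public List<SmartForm> GetSmartForms(XDocument xmlDoc,
                                             Func<XElement, bool> whereClause)
        {
            return xmlDoc.Descendants("smartForm")
                .Where(whereClause)
                .Select(smartForm => new SmartForm
                {
                    Id = (int)smartForm.Element("id"),
                    Title = smartForm.Element("title").Value
                    // ... remaining properties exactly as in the question
                })
                .ToList();
        }

        // Call site:
        // var smartForms = GetSmartForms(xmlDoc,
        //     smartForm => (int)smartForm.Element("id") == id);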

    Read the article

  • PL/pgSQL: How to return a record from a function executed by an INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database: create sequence data_sequence; create table data_table ( id integer primary key, field varchar(100) ); create view data_view as select id, field from data_table; create function data_insert(_new data_view) returns data_view as $$declare _id integer; _result data_view%rowtype; begin _id := nextval('data_sequence'); insert into data_table(id, field) values(_id, _new.field); select * into _result from data_view where id = _id; return _result; end; $$ language plpgsql; create rule insert as on insert to data_view do instead select data_insert(new); Then I type in psql: insert into data_view(field) values('abc'); I would like to see something like: id | field ----+--------- 1 | abc Instead I see: data_insert ------------- (1, "abc") Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I can obtain the id of a just-inserted record without selecting for it from scratch. Something like: insert into data_view(field) values('abc') returning id into my_variable would be nice, but it fails with this error: ERROR: cannot perform INSERT RETURNING on relation "data_view" HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause. I don't really understand that HINT. I use PostgreSQL 8.4.
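    Following the HINT literally, one sketch drops the function-based rule in favour of an unconditional DO INSTEAD INSERT carrying its own RETURNING list, whose columns must match the view's row type:

        -- Replaces "create rule insert ... do instead select data_insert(new);"
        CREATE RULE data_view_insert AS
        ON INSERT TO data_view
        DO INSTEAD
            INSERT INTO data_table (id, field)
            VALUES (nextval('data_sequence'), NEW.field)
            RETURNING data_table.id, data_table.field;

        -- Now this form should work on 8.4:
        -- insert into data_view(field) values('abc') returning id;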

    Read the article

  • How to enable logging for SiteMesh

    - by atomsfat
    Is there any way to enable logging for SiteMesh? I already put this in the log4j configuration but it doesn't work: <!-- Appenders --> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%-5p: %c - %m%n" /> </layout> </appender> <logger name="com.opensymphony"> <level value="debug"/> </logger> <logger name="org.springframework.beans"> <level value="warn" /> </logger> <logger name="org.springframework.binding"> <level value="debug" /> </logger> <logger name="org.springframework.jdbc"> <level value="warn" /> </logger> <logger name="org.springframework.transaction"> <level value="warn" /> </logger> <logger name="org.springframework.orm"> <level value="warn" /> </logger> <logger name="org.springframework.web"> <level value="debug" /> </logger> <logger name="org.springframework.webflow"> <level value="debug" /> </logger> <!-- Root Logger --> <root> <priority value="warn" /> <appender-ref ref="console" /> </root>
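    If the goal is specifically SiteMesh's own classes, a narrower logger category may help; SiteMesh 2.x lives under com.opensymphony.module.sitemesh (package name assumed from the SiteMesh sources), and note that SiteMesh itself emits very little logging, so silence does not necessarily mean misconfiguration:

        <logger name="com.opensymphony.module.sitemesh">
            <level value="debug" />
        </logger>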

    Read the article

  • PHP Form Sending Information to Limbo!

    - by drew
    I was told my client's quote form has not been generating very many emails. I have learned that although the form brings you to a confirmation page, the information never reaches the recipient. I have altered the code so it goes to my office email for testing purposes. If I post code for the form elements below, would someone be able to spot what the problem might be? Thank you very much! Link to the quote page is http://autoglass-plus.com/quote.php First is the form itself: <form id="quoteForm" name="form" action="form/index.php" method="post"> <fieldset> <p> <strong>Contact Information:</strong><br /> </p> <div> <label for="firstname">First Name:<br /> </label> <input type="text" size="30" name="firstname" class="txt" id="firstname" /> </div> <div> <label for="lastname">Last Name:<br /> </label> <input type="text" size="30" name="lastname" class="txt" id="lastname" /> </div> <div> <label for="address">Address:<br /> </label> <input type="text" size="30" name="address" class="txt" id="address" /> </div> <div> <label for="city">City:<br /> </label> <input type="text" size="30" name="city" class="txt" id="city" /> </div> <div> <label for="state">State:<br /> </label> <input type="text" size="30" name="state" class="txt" id="state" /> </div> <div> <label for="zip">Zip:<br /> </label> <input type="text" size="30" name="zip" class="txt" id="zip" /> </div> <div> <label for="label">Phone:<br /> </label> <input type="text" size="30" name="phone" class="txt" id="label" /> </div> <div> <label for="email">Email:<br /> </label> <input type="text" size="30" name="email" class="txt" id="email" /> </div> <p><br /> <b>Insurace Information</b></p> <p><i>Auto Glass Plus in an Approved Insurance Vendor. Insurance claims require additional information that we will request when we contact you for your quote.</i></p> <br /> <div> <input type="checkbox" name="insurance" value="yes" /> Check here if this is an insurance claim.<br /> <label for="year">Insurance Provider:<br /> </label> <input type="text" size="30" name="provider" class="txt" id="provider" /> </div> <p><br /> <b>Vehicle Information:</b><br /> </p> <div> <label for="year">Vehicle Year :<br /> </label> <input type="text" size="30" name="year" class="txt" id="year" /> </div> <div> <label for="make">Make: </label> <br /> <input type="text" size="30" name="make" class="txt" id="make" /> </div> <div> <label for="model">Model:</label> <br /> <input type="text" size="30" name="model" class="txt" id="model" /> </div> <div> <label for="body">Body Type:<br /> </label> <select name="body" id="body"> <option>Select One</option> <option value="2 Door Hatchback">2 Door Hatchback</option> <option value="4 Door Hatchback">4 Door Hatchback</option> <option value="2 Door Sedan">2 Door Sedan</option> <option value="4 Door Sedan">4 Door Sedan</option> <option value="Station Wagon">Station Wagon</option> <option value="Van">Van</option> <option value="Sport Utility">Sport Utility</option> <option value="Pickup Truck">Pickup Truck</option> <option value="Other Truck">Other Truck</option> <option value="Recreational Vehicle">Recreational Vehicle</option> <option value="Other">Other</option> </select> </div> <p><b><br /> Glass in Need of Repair:</b><br /> </p> <div> <input type="checkbox" name="repairs" value="Windshield" /> Windshield<br /> <input type="checkbox" name="repairs" value="Back Glass" /> Back Glass<br /> <input type="checkbox" name="repairs" value="Driver&rsquo;s Side Window" /> Side Window*<br /> <input type="checkbox" name="repairs" value="Chip Repair" 
/> Chip Repair<br /> <input type="checkbox" name="repairs" value="Other" /> Other </div> <p><strong>*Important:</strong> For side glass, please indicate the specific window that needs replacement <i>(e.g. passenger side rear door or driver side vent glass)</i>, and any tinting color preference in the <strong>Describe Damage </strong> field.</p> <p><br /> <b>Describe Damage</b></p> <div> <textarea rows="6" name="damage" id="damage" cols="37" class="txt"></textarea> </div> <input type="hidden" name="thanks" value="../thanks.php" /> <input type="hidden" name="required_fields" value="firstname, lastname, email, phone" /> <input type="hidden" name="html_template" value="testform.tpl.html" /> <input type="hidden" name="mail_template" value="testmail.tpl.txt" /> <div class="submit"> <center> <input type="submit" value="Submit Form" name="Submit" id="Submit" /> </center> </div> </fieldset> </form> Then it sends to a file named index.php inside the "form" folder: <?php $script_root = './'; $referring_server = 'www.wmsgroup.com, wmsgroup.com, scripts'; $allow_empty_referer = 'yes'; // (yes, no) $language = 'en'; // (see folder 'languages') $ip_banlist = ''; $ip_address_count = '0'; $ip_address_duration = '48'; $show_limit_errors = 'yes'; // (yes, no) $log_messages = 'no'; // (yes, no) $text_wrap = '65'; $show_error_messages = 'yes'; $attachment = 'no'; // (yes, no) $attachment_files = 'jpg, gif,png, zip, txt, pdf, doc, ppt, tif, bmp, mdb, xls, txt'; $attachment_size = 100000; $path['logfile'] = $script_root . 'logfile/logfile.txt'; $path['upload'] = $script_root . 'upload/'; // chmod 777 upload $path['templates'] = $script_root . 'templates/'; $file['default_html'] = 'testform.tpl.html'; $file['default_mail'] = 'testmail.tpl.txt'; /***************************************************** ** Add further words, text, variables and stuff ** that you want to appear in the templates here. ** The values are displayed in the HTML output and ** the e-mail. *****************************************************/ $add_text = array( 'txt_additional' => 'Additional', // {txt_additional} 'txt_more' => 'More' // {txt_more} ); /***************************************************** ** Do not edit below this line - Ende der Einstellungen *****************************************************/ /***************************************************** ** Send safety signal to included files *****************************************************/ define('IN_SCRIPT', 'true'); /***************************************************** ** Load formmail script code *****************************************************/ include($script_root . 'inc/formmail.inc.php') ?> There is also formail.inc.php, testform.tpl.php, testform.tpl.text and then the confirmation page, thanks.php I want to know how these all work together and what the problem could be.
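    One thing that stands out in the form markup itself (an observation about the code shown, not necessarily the cause of the lost mail): all five "repairs" checkboxes share name="repairs", so PHP keeps only the last checked value. Suffixing the name with [] makes PHP collect every checked box into an array, e.g.:

        <!-- hypothetical fix: array-style checkbox names -->
        <input type="checkbox" name="repairs[]" value="Windshield" /> Windshield<br />
        <input type="checkbox" name="repairs[]" value="Back Glass" /> Back Glass<br />

        <?php
        // Server side: $_POST['repairs'] is now an array of checked values.
        $repairs = isset($_POST['repairs']) ? (array) $_POST['repairs'] : array();
        echo htmlspecialchars(implode(', ', $repairs));
        ?>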

    Read the article

  • IComparable not included when serializing in WCF

    - by djerry
    Hey guys, I have a list I'm filling on the server side. It's a list of "User", which implements IComparable. Now when WCF is serializing the data, I guess it's not including the CompareTo method. This is my object class: [DataContract] public class User : IComparable { private string e164, cn, h323; private int id; private DateTime lastActive; [DataMember] public DateTime LastActive { get { return lastActive; } set { lastActive = value; } } [DataMember] public int Id { get { return id; } set { id = value; } } [DataMember] public string H323 { get { return h323; } set { h323 = value; } } [DataMember] public string Cn { get { return cn; } set { cn = value; } } [DataMember] public string E164 { get { return e164; } set { e164 = value; } } public User() { } public User(string e164, string cn, string h323, DateTime lastActive) { this.E164 = e164; this.Cn = cn; this.H323 = h323; this.LastActive= lastActive; } [DataMember] public string ToStringExtra { get { if (h323 != "/" && h323 != "") return h323 + " (" + e164 + ")"; return e164; } set { ;} } public override string ToString() { if (Cn.Equals("Trunk Line") || Cn.Equals("")) if (h323.Equals("")) return E164; else return h323; return Cn; } public int CompareTo(object obj) { User user = (User)obj; return user.LastActive.CompareTo(this.LastActive); } } Is it possible to get the CompareTo method to reach the client? Putting [DataMember] on it isn't the solution, as I tried it (I know...). Thanks in advance.
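    For context, DataContract serialization moves data only; methods and interface implementations never reach the generated proxy type. Two hedged options: enable "Reuse types in referenced assemblies" on the service reference so the client compiles against this very User class (CompareTo included), or sort on the client without IComparable (a sketch; the service call name is hypothetical):

        // Client side:
        List<User> users = new List<User>(client.GetUsers());  // hypothetical call
        users.Sort((a, b) => b.LastActive.CompareTo(a.LastActive));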

    Read the article

  • Why does my Doctrine DBAL query return no results when quoted?

    - by braveterry
    I'm using the Doctrine DataBase Abstraction Layer (DBAL) to perform some queries. For some reason, when I quote a parameter before passing it to the query, I get back no rows. When I pass it unquoted, it works fine. Here's the relevant snippet of code I'm using: public function get($game) { load::helper('doctrinehelper'); $conn = doctrinehelper::getconnection(); $statement = $conn->prepare('SELECT games.id as id, games.name as name, games.link_url, games.link_text, services.name as service_name, image_url FROM games, services WHERE games.name = ? AND services.key = games.service_key'); $quotedGame = $conn->quote($game); load::helper('loghelper'); $logger = loghelper::getLogger(); $logger->debug("Quoted Game: $quotedGame"); $logger->debug("Unquoted Game: $game"); $statement->execute(array($quotedGame)); $resultsArray = $statement->fetchAll(); $logger->debug("Number of rows returned: " . count($resultsArray)); return $resultsArray; } Here's what the log shows: 01/01/11 17:00:13,269 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction' 01/01/11 17:00:13,269 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction 01/01/11 17:00:13,270 [2112] DEBUG root - Number of rows returned: 0 If I change this line: $statement->execute(array($quotedGame)); to this: $statement->execute(array($game)); I get this in the log: 01/01/11 16:51:42,934 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction' 01/01/11 16:51:42,935 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction 01/01/11 16:51:42,936 [2112] DEBUG root - Number of rows returned: 1 Have I fat-fingered something?
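    A likely explanation (standard PDO/DBAL behaviour, not a guess about this specific driver): prepared-statement parameters are escaped by the driver at execute time, so calling quote() first double-quotes the value; the query then compares games.name against the literal string 'Diablo II Lord of Destruction' including the apostrophes, which matches nothing. A sketch of the fix:

        // Bind the raw value; reserve $conn->quote() for SQL you
        // concatenate by hand, never for bound parameters.
        $statement->execute(array($game));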

    Read the article

  • Flash External Interface issue with Firefox

    - by majestiq
    I am having a hard time getting ExternalInterface to work on Firefox. I am trying to call a AS3 function from javascript. The SWF is setup with the right callbacks and it is working in IE. I am using AC_RunActiveContent.js to embed the swf into my page. However, I have modified it to add an ID to the Object / Embed Tags. Below are object and embed tag that are generated for IE and for Firefox respectively. <object codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" width="400" height="400" align="middle" id="jpeg_encoder2" name="jpeg_encoder3" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" > <param name="movie" value="/jpeg_encoder/jpeg_encoder3.swf" /> <param name="quality" value="high" /> <param name="play" value="true" /> <param name="loop" value="true" /> <param name="scale" value="showall" /> <param name="wmode" value="window" /> <param name="devicefont" value="false" /> <param name="bgcolor" value="#ffffff" /> <param name="menu" value="false" /> <param name="allowFullScreen" value="false" /> <param name="allowScriptAccess" value="always" /> </object> <embed width="400" height="400" src="/jpeg_encoder/jpeg_encoder3.swf" quality="high" pluginspage="http://www.macromedia.com/go/getflashplayer" align="middle" play="true" loop="true" scale="showall" wmode="window" devicefont="false" id="jpeg_encoder2" bgcolor="#ffffff" name="jpeg_encoder3" menu="false" allowFullScreen="false" allowScriptAccess="always" type="application/x-shockwave-flash" > </embed> I am calling the function like this... <script> try { document.getElementById('jpeg_encoder2').processImage(z); } catch (e) { alert(e.message); } </script> In Firefox, I get an error saying "document.getElementById("jpeg_encoder2").processImage is not a function" Any Ideas?
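    One pattern worth trying (the Adobe-documented movie lookup, adapted on the assumption that Firefox resolves the <embed> differently than getElementById expects; the helper name is illustrative):

        function getFlashMovie(movieName) {
            if (navigator.appName.indexOf("Microsoft") != -1) {
                return window[movieName];                    // IE: the <object>
            }
            return document[movieName] || window[movieName]; // others: the <embed>
        }

        try {
            // Both generated tags share name="jpeg_encoder3":
            getFlashMovie("jpeg_encoder3").processImage(z);
        } catch (e) {
            alert(e.message);
        }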

    Read the article

< Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >