Search Results

Search found 24117 results on 965 pages for 'write'.


  • Hiding Extended program check errors for Includes in ABAP

    - by Esti
    How can I stop the ABAP extended program check (SLIN) from reporting errors in include libraries that I may not have write access to? I like to leave the extended check with as few errors and warnings as possible. Usually, when I intentionally use something in a way that may cause a warning, I use the pseudo comments ("#EC *, etc.) to hide the message. This also tells the next programmer that I at least thought about the possible consequences of using something in a particular way. When the offending statements are in includes that I have no control over, I would like to hide the messages without changing those libraries/includes.

    Read the article

  • Sending Client Certificate in HttpWebRequest

    - by Aaron Fischer
    I am trying to pass a client certificate to a server using the code below; however, I still receive the HTTP Error 403.7 - Forbidden: SSL client certificate is required. What are the possible reasons the HttpWebRequest would not send the client certificate? var clientCertificate = new X509Certificate2( @"C:\Development\TestClient.pfx", "bob" ); HttpWebRequest tRequest = ( HttpWebRequest )WebRequest.Create( "https://ofxtest.com/ofxr.dll" ); tRequest.ClientCertificates.Add( clientCertificate ); tRequest.PreAuthenticate = true; tRequest.KeepAlive = true; tRequest.Credentials = CredentialCache.DefaultCredentials; tRequest.Method = "POST"; var encoder = new ASCIIEncoding(); var requestData = encoder.GetBytes( "<OFX></OFX>" ); tRequest.GetRequestStream().Write( requestData, 0, requestData.Length ); tRequest.GetRequestStream().Close(); ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback( CertPolicy.ValidateServerCertificate ); WriteResponse( tRequest.GetResponse() );
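    One thing worth ruling out (a hedged suggestion rather than a confirmed fix) is whether the loaded certificate actually exposes its private key to the running process, and whether the server-certificate validation callback is registered before the request is made rather than after the request body has been written. A minimal diagnostic sketch along those lines, reusing the .pfx path and password from the question; the key-storage flags are an assumption worth verifying when the code runs under IIS:

      // load the certificate so the private key is persisted and visible to the worker process
      var clientCertificate = new X509Certificate2(
          @"C:\Development\TestClient.pfx", "bob",
          X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);
      Console.WriteLine(clientCertificate.HasPrivateKey); // must print True for TLS client authentication

      // register the callback before creating/sending the request, not after writing the body
      ServicePointManager.ServerCertificateValidationCallback = CertPolicy.ValidateServerCertificate;

      var tRequest = (HttpWebRequest)WebRequest.Create("https://ofxtest.com/ofxr.dll");
      tRequest.ClientCertificates.Add(clientCertificate);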

    Read the article

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
    I know that I have not posted much on MVC, mostly because I don’t use it on my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it’s about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn’t find any that would explain how we can have also AJAX, optimistic concurrency control and validation, using Entity Framework Code First, so I set out to write one! I won’t go into explaining what is MVC, Code First or optimistic concurrency control, or AJAX, I assume you are all familiar with these concepts by now. Let’s consider an hypothetical use case, products. For simplicity, we only want to be able to either view a single product or edit this product. First, we need our model: 1: public class Product 2: { 3: public Product() 4: { 5: this.Details = new HashSet<OrderDetail>(); 6: } 7:  8: [Required] 9: [StringLength(50)] 10: public String Name 11: { 12: get; 13: set; 14: } 15:  16: [Key] 17: [ScaffoldColumn(false)] 18: [DatabaseGenerated(DatabaseGeneratedOption.Identity)] 19: public Int32 ProductId 20: { 21: get; 22: set; 23: } 24:  25: [Required] 26: [Range(1, 100)] 27: public Decimal Price 28: { 29: get; 30: set; 31: } 32:  33: public virtual ISet<OrderDetail> Details 34: { 35: get; 36: protected set; 37: } 38:  39: [Timestamp] 40: [ScaffoldColumn(false)] 41: public Byte[] RowVersion 42: { 43: get; 44: set; 45: } 46: } Keep in mind that this is a simple scenario. Let’s see what we have: A class Product, that maps to a product record on the database; A product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute); The product’s Price must be a decimal value between 1 and 100 (RangeAttribute); It contains a set of order details, for each time that it has been ordered, which we will not talk about (Details); The record’s primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute); The table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control mapped to property RowVersion (TimestampAttribute). Then we will need a controller for viewing product details, which will located on folder ~/Controllers under the name ProductController: 1: public class ProductController : Controller 2: { 3: [HttpGet] 4: public ViewResult Get(Int32 id = 0) 5: { 6: if (id != 0) 7: { 8: using (ProductContext ctx = new ProductContext()) 9: { 10: return (this.View("Single", ctx.Products.Find(id) ?? new Product())); 11: } 12: } 13: else 14: { 15: return (this.View("Single", new Product())); 16: } 17: } 18: } If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product’s details, more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as: 1: public class ProductContext: DbContext 2: { 3: public DbSet<Product> Products 4: { 5: get; 6: set; 7: } 8: } Like I said before, I’ll keep it simple for now, only aggregate root Product is available. 
The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template: 1: routes.MapRoute( 2: "Default", // Route name 3: "{controller}/{action}/{id}", // URL with parameters 4: new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults 5: ); Next, we need a view for displaying the product details, let’s call it Single, and have it located under ~/Views/Product: 1: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> 2: <!DOCTYPE html> 3:  4: <html> 5: <head runat="server"> 6: <title>Product</title> 7: <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> 1:  2: <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"> 1: </script> 2: <script type="text/javascript"> 3: function onFailure(error) 4: { 5: } 6:  7: function onComplete(ctx) 8: { 9: } 10:  11: </script> 8: </head> 9: <body> 10: <div> 11: <% 1: : this.Html.ValidationSummary(false) %> 12: <% 1: using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions{ HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> 13: <% 1: : this.Html.EditorForModel() %> 14: <input type="submit" name="submit" value="Submit" /> 15: <% 1: } %> 16: </div> 17: </body> 18: </html> Yes… I am using ASPX syntax… sorry about that!   I implemented an editor template for the Product class, which must be located on the ~/Views/Shared/EditorTemplates folder as file Product.ascx: 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %> 2: <div> 3: <%: this.Html.HiddenFor(model => model.ProductId) %> 4: <%: this.Html.HiddenFor(model => model.RowVersion) %> 5: <fieldset> 6: <legend>Product</legend> 7: <div class="editor-label"> 8: <%: this.Html.LabelFor(model => model.Name) %> 9: </div> 10: <div class="editor-field"> 11: <%: this.Html.TextBoxFor(model => model.Name) %> 12: <%: this.Html.ValidationMessageFor(model => model.Name) %> 13: </div> 14: <div class="editor-label"> 15: <%= this.Html.LabelFor(model => model.Price) %> 16: </div> 17: <div class="editor-field"> 18: <%= this.Html.TextBoxFor(model => model.Price) %> 19: <%: this.Html.ValidationMessageFor(model => model.Price) %> 20: </div> 21: </fieldset> 22: </div> One thing you’ll notice is, I am including both the ProductId and the RowVersion properties as hidden fields; they will come handy later or, so that we know what product and version we are editing. The other thing is the included JavaScript files: jQuery, jQuery UI and unobtrusive validations. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript intellisense for jQuery functions. OK, so, at this moment, I want to add support for AJAX and optimistic concurrency control. So I write a controller method like this: 1: [HttpPost] 2: [AjaxOnly] 3: [Authorize] 4: public JsonResult Edit(Product product) 5: { 6: if (this.TryValidateModel(product) == true) 7: { 8: using (BlogContext ctx = new BlogContext()) 9: { 10: Boolean success = false; 11:  12: ctx.Entry(product).State = (product.ProductId == 0) ? 
EntityState.Added : EntityState.Modified; 13:  14: try 15: { 16: success = (ctx.SaveChanges() == 1); 17: } 18: catch (DbUpdateConcurrencyException) 19: { 20: ctx.Entry(product).Reload(); 21: } 22:  23: return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) })); 24: } 25: } 26: else 27: { 28: return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty })); 29: } 30: } So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties: Success: a boolean flag; RowVersion: the current version of the ROWVERSION column as a Base-64 string; ProductId: the inserted product id, as coming from the database. If the product is new, it will be inserted into the database, and its primary key will be returned into the ProductId property. Success will be set to true; If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value on the database, so the record must have been modified between the time that the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false. Otherwise, it will be updated, and Success, ProductId and RowVersion will all have their values set accordingly. So let’s see how we can react to these situations on the client side. Specifically, we want to deal with these situations: The user is not logged in when the update/create request is made, perhaps the cookie expired; The optimistic concurrency check failed; All went well. So, let’s change our view: 1: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> 2: <%@ Import Namespace="System.Web.Security" %> 3:  4: <!DOCTYPE html> 5:  6: <html> 7: <head runat="server"> 8: <title>Product</title> 9: <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> 1:  2: <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"> 1: </script> 2: <script type="text/javascript"> 3: function onFailure(error) 4: { 5: window.alert('An error occurred: ' + error); 6: } 7:  8: function onSuccess(ctx) 9: { 10: if (typeof (ctx.Success) != 'undefined') 11: { 12: $('input#ProductId').val(ctx.ProductId); 13: $('input#RowVersion').val(ctx.RowVersion); 14:  15: if (ctx.Success == false) 16: { 17: window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.'); 18: } 19: else 20: { 21: window.alert('Saved successfully'); 22: } 23: } 24: else 25: { 26: if (window.confirm('Not logged in. 
Login now?') == true) 27: { 28: document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname; 29: } 30: } 31: } 32:  33: </script> 10: </head> 11: <body> 12: <div> 13: <% 1: : this.Html.ValidationSummary(false) %> 14: <% 1: using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions{ HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> 15: <% 1: : this.Html.EditorForModel() %> 16: <input type="submit" name="submit" value="Submit" /> 17: <% 1: } %> 18: </div> 19: </body> 20: </html> The implementation of the onSuccess function first checks if the response contains a Success property, if not, the most likely cause is the request was redirected to the login page (using Forms Authentication), because it wasn’t authenticated, so we navigate there as well, keeping the reference to the current page. It then saves the current values of the ProductId and RowVersion properties to their respective hidden fields. They will be sent on each successive post and will be used in determining if the request is for adding a new product or to updating an existing one. The only thing missing is the ability to insert a new product, after inserting/editing an existing one, which can be easily achieved using this snippet: 1: <input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');"/> And that’s it.

    Read the article

  • How to use EMF to read XML file?

    - by zengr
    I have to use EMF in one of my class projects. I am trying to understand how to use EMF to do the following: read XML and get the values into objects; use ORM to persist the values in those objects to the database (done); get data from the database using ORM and generate XML. I need to do all of that using EMF (no idea whatsoever) and JPA (done). I have used JAXB and I know this can be done using JAXB, but how is EMF equivalent to JAXB? I have created many Java classes using EMF, but there are so many of them! Where do I implement the read/write methods, and how do I run the EMF project? Thanks

    Read the article

  • On MacOSX, in a C++ program, what guarantees can I have on file IO

    - by anon
    I am on MacOSX. I am writing a multi-threaded program. One thread does logging. The non-logging threads may crash at any time. What conventions should I adopt in the logger, and what guarantees can I have? I would prefer a solution where, even if I crash during part of a write, previous writes still go to disk, and when reading back the log I can figure out "ah, I wrote 100 complete entries, then I crashed on the 101st". Thanks!

    Read the article

  • JPG segment length encoding

    - by Blorgbeard
    I'm trying to write some code to extract Exif information from a JPG. Exif is stored in the APP1 segment of a JPG file. According to the Exif spec, the format of the APP1 segment is supposed to start like this: FF E1 // APP1 segment marker nn nn // Length of segment 45 // 'E' 78 // 'x' 69 // 'i' 66 // 'f' And it goes until there is an FF followed by something other than FF or 00. Looking at a JPG in a hex editor, I can see FF E1 and the Exif string, but I'm having trouble decoding the length bytes. An example: In one jpg, my hex editor tells me the APP1 segment is 686 bytes long, but the length bytes are F7 C8. How should I use those bytes to come up with 686 decimal?
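    For reference, the two bytes after the FF E1 marker are a big-endian 16-bit length that counts the two length bytes themselves but not the marker. A minimal sketch of the decode (bytes and offset are hypothetical names, with offset pointing at the first length byte):

      int segmentLength = (bytes[offset] << 8) | bytes[offset + 1]; // big-endian 16-bit value
      int payloadLength = segmentLength - 2;                        // the length includes the two length bytes

    Note that 686 decimal is 0x02AE, while F7 C8 read big-endian is 63,432, so if the bytes at that position really are F7 C8 they are probably not the two bytes immediately following the marker (or the hex editor is reporting the size of something other than the APP1 segment).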

    Read the article

  • How to run assembly using NUnit's TestSuiteBuilder

    - by Dror Helper
    I'm trying to write a simple method that receives a file and runs it using NUnit. The code I managed to build using NUnit's source does not work: if(openFileDialog1.ShowDialog() != DialogResult.OK) { return; } var builder = new TestSuiteBuilder(); var testPackage = new TestPackage(openFileDialog1.FileName); var directoryName = Path.GetDirectoryName(openFileDialog1.FileName); testPackage.BasePath = directoryName; var suite = builder.Build(testPackage); TestResult result = suite.Run(new NullListener(), TestFilter.Empty); The problem is that I keep getting an exception thrown by builder.Build stating that the assembly was not found. What am I missing?
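    One commonly suggested step (a hedged sketch, assuming NUnit 2.5.x and a reference to nunit.core.dll) is to initialize NUnit's core services before building the suite, since the assembly builders and add-ins are not registered until that happens:

      // must run once before TestSuiteBuilder is used
      CoreExtensions.Host.InitializeService();

      var testPackage = new TestPackage(openFileDialog1.FileName);
      testPackage.BasePath = Path.GetDirectoryName(openFileDialog1.FileName);

      TestSuite suite = new TestSuiteBuilder().Build(testPackage);
      TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

    It is also worth confirming that the selected assembly's dependencies sit next to it under BasePath, since a failed dependency load can surface as the same "assembly not found" error.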

    Read the article

  • How to generate and print large XPS documents in WPF?

    - by bitbonk
    I would like to generate (and then print or save) big XPS documents (400 pages) from my WPF application. We have a large amount of in-memory data that needs to be written to XPS. How can this be done without getting an OutOfMemoryException? Is there a way I can write the document in chunks? How is this usually done? Should I not be using XPS for large files in the first place? The root cause of the OutOfMemoryException seems to be the creation of the huge FlowDocument. I am creating the full FlowDocument and then sending it to the XPS document writer. Is this the wrong approach?
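    One approach that avoids materializing the whole document (a sketch under the assumption that each page's content can be produced on demand; ReportPaginator, ReportWriter and the page drawing are hypothetical) is to hand the XpsDocumentWriter a custom DocumentPaginator instead of a full FlowDocument, so pages are rendered and serialized one at a time:

      using System.Globalization;
      using System.IO;
      using System.Windows;
      using System.Windows.Documents;
      using System.Windows.Media;
      using System.Windows.Xps;
      using System.Windows.Xps.Packaging;

      class ReportPaginator : DocumentPaginator
      {
          private readonly int pageCount;
          private Size pageSize = new Size(793.7, 1122.5); // roughly A4 at 96 DPI

          public ReportPaginator(int pageCount) { this.pageCount = pageCount; }

          public override DocumentPage GetPage(int pageNumber)
          {
              // build only this page's visual; nothing else needs to stay alive
              var visual = new DrawingVisual();
              using (DrawingContext dc = visual.RenderOpen())
              {
                  dc.DrawText(
                      new FormattedText("Page " + (pageNumber + 1),
                          CultureInfo.InvariantCulture, FlowDirection.LeftToRight,
                          new Typeface("Segoe UI"), 12, Brushes.Black),
                      new Point(48, 48));
              }
              return new DocumentPage(visual, pageSize, new Rect(pageSize), new Rect(pageSize));
          }

          public override bool IsPageCountValid { get { return true; } }
          public override int PageCount { get { return pageCount; } }
          public override Size PageSize { get { return pageSize; } set { pageSize = value; } }
          public override IDocumentPaginatorSource Source { get { return null; } }
      }

      static class ReportWriter
      {
          public static void WriteReport(string path, int pages)
          {
              XpsDocument xps = new XpsDocument(path, FileAccess.Write);
              XpsDocumentWriter writer = XpsDocument.CreateXpsDocumentWriter(xps);
              writer.Write(new ReportPaginator(pages)); // pages are pulled and written one by one
              xps.Close();
          }
      }

    Whether this fits depends on how naturally the in-memory data can be sliced into fixed pages; a FlowDocument gives pagination for free, but only by keeping the whole document tree in memory at once.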

    Read the article

  • How to convert XML to JSON in Python?

    - by Geuis
    I'm doing some work on App Engine and I need to convert an XML document being retrieved from a remote server into an equivalent JSON object. I'm using xml.dom.minidom to parse the XML data being returned by urlfetch. I'm also trying to use django.utils.simplejson to convert the parsed XML document into JSON. I'm completely at a loss as to how to hook the two together. Below is the code I more or less have been tinkering with. If anyone can put A & B together, I would be SO grateful. I'm freaking lost. from xml.dom import minidom from django.utils import simplejson as json #pseudo code that returns actual xml data as a string from remote server. result = urlfetch.fetch(url,'','get'); dom = minidom.parseString(result.content) json = simplejson.load(dom) self.response.out.write(json)

    Read the article

  • C# iterator for async file copy

    - by uno
    I've been running in circles to find the best solution for my client. We have a server that images are uploaded to via FTP. I want to write an application that scans this server at frequent intervals and, if it finds files, copies them to another, processing server. So if during a time cycle my app finds that there are 100 files, I want to start copying as many files as I can across to the processing server. I figured delegates would be the way to go, but now I've come across iterators... what do the experts say?
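    For the copy step itself, a minimal sketch (assuming .NET 4 is available; the class name and directory paths are placeholders) that fans the copies out across a few threads without hand-written delegates or iterators:

      using System.IO;
      using System.Threading.Tasks;

      static class FileMover
      {
          // copies every file currently in sourceDir over to targetDir, a few at a time
          public static void CopyPendingFiles(string sourceDir, string targetDir)
          {
              Parallel.ForEach(
                  Directory.GetFiles(sourceDir),
                  new ParallelOptions { MaxDegreeOfParallelism = 4 },
                  file =>
                  {
                      string destination = Path.Combine(targetDir, Path.GetFileName(file));
                      File.Copy(file, destination, true); // overwrite if a later scan picks it up again
                  });
          }
      }

    A timer (or a FileSystemWatcher on the FTP drop folder) can then call this once per scan interval.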

    Read the article

  • Removing Left Recursion in ANTLR

    - by prosseek
    As is explained in http://stackoverflow.com/questions/2652060/removing-left-recursion , there are two ways to remove left recursion: modify the original grammar to remove the left recursion using some procedure, or write the grammar from the start so that it has no left recursion. What do people normally use for removing (or avoiding) left recursion with ANTLR? I've used flex/bison for parsers, but I need to use ANTLR. The only thing I'm concerned about with ANTLR (or LL parsers in general) is left recursion removal. In practical terms, how serious a problem is removing left recursion in ANTLR? Is it a showstopper for using ANTLR, or does nobody in the ANTLR community care about it? I like ANTLR's AST generation. In terms of getting an AST quickly and easily, which of the two methods is preferable?
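    For reference, the usual hand rewrite for an LL tool such as ANTLR 3 turns left recursion into iteration; a schematic example, not taken from any particular grammar:

      // left-recursive rule: rejected by an LL parser generator
      expr : expr '+' term
           | term
           ;

      // equivalent iterative form
      expr : term ('+' term)* ;

    Grammars written for ANTLR tend to be phrased in the iterative style from the start rather than mechanically transformed afterwards.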

    Read the article

  • How can I extend DynamicQuery.cs to implement a .Single method?

    - by Yoenhofen
    I need to write some dynamic queries for a project I'm working on. I'm finding out that a significant amount of time is being spent by my program on the Count and First methods, so I started to change to .Single, only to find out that there is no such method. The code below was my first attempt at creating one (mostly copied from the Where method), but it's not working. Help? public static object Single(this IQueryable source, string predicate, params object[] values) { if (source == null) throw new ArgumentNullException("source"); if (predicate == null) throw new ArgumentNullException("predicate"); LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values); return source.Provider.CreateQuery( Expression.Call( typeof(Queryable), "Single", new Type[] { source.ElementType }, source.Expression, Expression.Quote(lambda))); }
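    For what it's worth, the copied pattern fails because Single is a scalar operator: CreateQuery builds another deferred query, whereas a single result has to go through the provider's Execute path. A sketch of the extension adjusted accordingly (same assumptions and helpers as the original DynamicQuery.cs):

      public static object Single(this IQueryable source, string predicate, params object[] values)
      {
          if (source == null) throw new ArgumentNullException("source");
          if (predicate == null) throw new ArgumentNullException("predicate");

          LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values);

          // Single returns a value, not a query, so go through Execute rather than CreateQuery
          return source.Provider.Execute(
              Expression.Call(
                  typeof(Queryable), "Single",
                  new Type[] { source.ElementType },
                  source.Expression,
                  Expression.Quote(lambda)));
      }

    The call still resolves to Queryable.Single, so it throws if zero or more than one element matches the predicate.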

    Read the article

  • PHP MySQLi timeout not working

    - by Marcin
    Hi guys, I have a weird problem with mysqli timeout options. I am using mysqli_init() and real_connect() in order to set MYSQLI_OPT_CONNECT_TIMEOUT: $this->__mysqli = mysqli_init(); if(!$this->__mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT,1)) throw new Exception('Timeout settings failed'); $this->__mysqli->real_connect(host,user,pass,db); .... Then I am initiating a query on a locked table (LOCK TABLES users WRITE) and it's just hanging, ignoring all my settings, even: set_time_limit(1); ini_set('max_execution_time',1); ini_set('default_socket_timeout',1); ini_set('mysql.connect_timeout',1); I understand why set_time_limit(1) and max_execution_time are ignored, but why are the other timeouts, and especially MYSQLI_OPT_CONNECT_TIMEOUT, ignored, and how do I solve it? I am using PHP 5.3.1 on Windows and Linux boxes, please help.

    Read the article

  • UITextView autoscroll to last line

    - by MihaiD
    Hi *, When writing more text in a UITextView than can fit entirely inside it, the text will scroll up and the cursor will often place itself one or two lines above the view's bottom line. This is a bit frustrating, as I want my application to make good use of the entire height of the text view. Basically, what I want is to configure the UITextView to write all the way down to its lowest part and not use that space just for scrolling. I've seen some similar questions here, here and here. However, I've not seen a proper solution yet. Thanks

    Read the article

  • JavaScript - get detailed information about the browser

    - by iconiK
    Basically I'm looking for something to give me easy access to information like useragentstring.com, but in JS, without me parsing the user agent and looking for each possible bit of text. The object could be something like this: browser = UserAgent.Browser; // Chrome browserVer = UserAgent.BrowserVersion; // 5.0.342.9 os = UserAgent.OperatingSystem; // Windows NT osVer = UserAgent.OperatingSystemVersion; // 6.1 layoutEng = UserAgent.LayoutEngine; // WebKit layoutEngVer = UserAgent.LayoutEngineVersion; // 533.2 Does something similar to that exist or do I have to write one myself? Writing yet another user agent parser doesn't seem that easy with all those impersonations going back to the dark ages of the web. Specifically I'm looking for something that doesn't just split the user agent into parts and give them to me, because that's as useless as the user agent itself; instead it should parse the user agent and recognize the engine, browser, OS, etc. and return the concrete parts only, as in the example.

    Read the article

  • json support in visual studio 2010

    - by jnsohnumr
    Hi, I'm trying to work on a new project parsing some JSON data for a Silverlight 4 project (specifically created as a "Silverlight Business Application - Visual C#" project) using C# in Visual Studio 2010, and I can't find how to include the references to have parsers and native object support for JSON data. As far as I know my development tools are up to date (checked MS update). I know that I can probably just write my own parser but that seems like re-inventing the wheel. Below are some lines that work in VS 2008 in another project of ours (can't post the files due to their being part of a business app): using System.Json; results = (JsonObject)JsonObject.Load(e.Result); I hope my description is adequate. Thanks for looking, jnsohnumr
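    For reference (a hedged note, since install paths vary), System.Json for Silverlight ships as a separate assembly in the Silverlight SDK rather than being referenced by default, so the missing piece is usually a project reference to System.Json.dll (typically found under Microsoft SDKs\Silverlight\v4.0\Libraries\Client\). Once that reference is added, the same kind of code compiles; jsonText and jsonStream below are hypothetical placeholders for however the data arrives:

      using System.Json;

      JsonValue fromText = JsonValue.Parse(jsonText);                 // parse a JSON string
      JsonObject fromStream = (JsonObject)JsonValue.Load(jsonStream); // or load from a stream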

    Read the article

  • How to reload Ext.tree.TreePanel on demand?

    - by Sergei Stolyarov
    I want to create an Ext.tree.TreePanel component and periodically load content from an external URL. So I've written something like new Ext.tree.TreePanel({ root: { nodeType: 'async', text: 'asdasd', draggable: false, id: 'folders-tree-root' }, loader: new Ext.tree.TreeLoader() }); And now I want to reload this tree, so I write: tree.loader.dataUrl = 'folders-sample.json'; tree.root.reload(); And nothing happens. Update: the only way I've found is to set some value (even an invalid one) for the dataUrl param on TreeLoader creation: new Ext.tree.TreePanel({ root: { nodeType: 'async', text: 'asdasd', draggable: false, id: 'folders-tree-root' }, loader: new Ext.tree.TreeLoader({ dataUrl: 'something' }) });

    Read the article

  • Roll up project-level tasks to the project collection portal in TFS2010

    - by adam.mokan
    I have a Project Collection set up in my TFS2010RC deployment. I have two Projects set up under this collection with their own task lists, which are populated with data. I fully expected the tasks from these individual projects to "roll up" and appear in the task list at the Project Collection level, but they do not. The Project Collection task list is empty. Basically, I'm looking to provide a view so a supervisor can see all tasks across projects quickly and easily. I'm sure I could write a Reporting Services report, but this seems like something so basic that it would have been included and just needs to be turned on or something. I'm sure I'm probably missing something really simple here. Thanks.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt: –  Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON         Declare @Output VarChar(max)     Set @Output = ”       SELECT  @Output = @Output + Email +Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE ‘%_@__%.__%’       If @Output > ”         Begin             Set @Output = Char(13) + Char(10)                           + @Output             EXEC tSQLt.Fail@Output         End   END;   Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control,  it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • Cocoa structs and NSMutableArray

    - by Circle
    I have an NSMutableArray in which I'm trying to store and access some structs. How do I do this? 'addObject' gives me an error saying "Incompatible type for argument 1 of addObject". Here is an example ('in' is an NSFileHandle, 'array' is the NSMutableArray): //Write points for(int i=0; i<5; i++) { struct Point p; buff = [in readDataOfLength:1]; [buff getBytes:&(p.x) length:sizeof(p.x)]; [array addObject:p]; } //Read points for(int i=0; i<5; i++) { struct Point p = [array objectAtIndex:i]; NSLog(@"%i", p.x); }

    Read the article

  • Should I suppress CA1062: Validate arguments of public methods?

    - by brickner
    I've recently upgraded my project to Visual Studio 2010 from Visual Studio 2008. In Visual Studio 2008, this Code Analysis rule doesn't exist. Now I'm not sure whether I should use this rule or not. I'm building an open source library, so it seems important to keep people safe from making mistakes. However, if all I'm going to do is throw ArgumentNullException when the parameter is null, it seems like writing useless code, since an exception will be thrown anyway even if I don't write that code. Should I remove the rule or fix the violations?
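    For context, the pattern the rule expects is simply an explicit guard at the top of each externally visible method; a minimal sketch (the Customer type and Process method are hypothetical):

      public void Process(Customer customer)
      {
          if (customer == null)
          {
              throw new ArgumentNullException("customer"); // what CA1062 wants to see
          }

          // the analyzer now treats customer as validated
          Console.WriteLine(customer.Name);
      }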

    Read the article

  • HttpWebResponse with MJPEG and multipart/x-mixed-replace; boundary=--myboundary response content type

    - by arri.me
    I have an ASP.NET application in which I need to show a video feed from a security camera. The video feed has a content type of 'multipart/x-mixed-replace; boundary=--myboundary' with the image data between the boundaries. I need assistance with passing that stream of data through to my page so that the client-side plugin I have can consume the stream just as it would if I browsed to the camera's web interface directly. The following code does not work: //Get response data byte[] data = HtmlParser.GetByteArrayFromStream(response.GetResponseStream()); if (data != null) { HttpContext.Current.Response.OutputStream.Write(data, 0, data.Length); } return;
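    One direction worth trying (a sketch, not a verified fix) is to relay the camera's response without buffering it at all: copying the whole stream into a byte array can never finish for a live multipart/x-mixed-replace feed, so instead copy the content type across, disable buffering, and flush each chunk to the client as it arrives:

      // assumes `response` is the HttpWebResponse already obtained from the camera request
      HttpResponse clientResponse = HttpContext.Current.Response;
      clientResponse.ContentType = response.ContentType; // multipart/x-mixed-replace; boundary=--myboundary
      clientResponse.BufferOutput = false;

      using (Stream cameraStream = response.GetResponseStream())
      {
          byte[] buffer = new byte[8192];
          int bytesRead;
          while (clientResponse.IsClientConnected
                 && (bytesRead = cameraStream.Read(buffer, 0, buffer.Length)) > 0)
          {
              clientResponse.OutputStream.Write(buffer, 0, bytesRead);
              clientResponse.Flush(); // push each part to the client-side plugin immediately
          }
      }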

    Read the article

  • How do I use local memory in OpenCL?

    - by splicer
    I've been playing with OpenCL recently, and I'm able to write simple kernels that use only global memory. Now I'd like to start using local memory, but I can't seem to figure out how to use get_local_size() and get_local_id() to compute one "chunk" of output at a time. For example, let's say I wanted to convert Apple's OpenCL Hello World example kernel to something that uses local memory. How would you do it? Here's the original kernel source: __kernel void square( __global float *input, __global float *output, const unsigned int count) { int i = get_global_id(0); if (i < count) output[i] = input[i] * input[i]; } If this example can't easily be converted into something that shows how to make use of local memory, any other simple example will do. Thanks!

    Read the article

  • Scheme homework help

    - by John
    Okay, so I have started a new language in class. We are learning Scheme and I'm not sure how to do it. When I say learning a new language, I mean being thrown homework and told to figure it out. A couple of the problems have got me stumped. My first problem is: write a Scheme function that returns true (the Boolean constant #t) if the parameter is a list containing n a's followed by n b's, and false otherwise. Here's what I have right now: (define aequalb (lamda (list) (let ((head (car list)) (tail(cdr list))) (if(= 'a head) ((let ((count (count + 1))) (let (( newTail (aequalb tail)))) #f (if(= 'b head) ((let ((count (count - 1))) (let (( newTail (aequalb tail)))) #f (if(null? tail) (if(= count 0) #t #f))))))))))) I know this is completely wrong, but I've been trying, so please take it easy on me. Any help would be much appreciated.

    Read the article

  • Complex reporting on subversion (possibly Export Subversion log into database for reporting)

    - by James A. N. Stauffer
    What is the best way to do complex reporting on Subversion logs, like the following for each file: file, directory, last revision date, previous revision date (where the revision date is at least 30 days older than the last), days diff (between revision dates)? Since Subversion allows one revision to change multiple files, I assume svn log needs to be run against each file individually. Ideas (that don't seem very good): Shell scripting to produce a CSV file to be imported into a DB. The following is a start but doesn't show the filename: find . -name "." -print | xargs -l svn log -l 2 Shell scripting to produce XML and then use XSLT to create CSV to import into a DB. It might use a similar command to the above but would still have some of the same limitations. Write a program to just parse the log on the whole directory tree, make one insert into the DB per revision/file combination, and then query the DB.

    Read the article
