Search Results

Search found 28249 results on 1130 pages for 'sql injection'.


  • Sensible Way to Pass Web Data to Sql Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now.

    First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change.

    The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) into a temp table. A stored procedure is then run which expects the temp table to exist, and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes.

    Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element, and an ID of 0 signifies "create", aka an insert of a new item.

        <Elements>
          <Element ID="1234">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">287</Attr>
            <Attr ID="234">
              <Element ID="99825">
                <Attr ID="7">Value1</Attr>
                <Attr ID="8">Value2</Attr>
                <Attr ID="9" Action="delete" />
              </Element>
              <Element ID="99826" Action="delete" />
              <Element ID="0" Type="24">
                <Attr ID="7">Value4</Attr>
                <Attr ID="8">Value5</Attr>
                <Attr ID="9">Value6</Attr>
              </Element>
              <Element ID="0" Type="24">
                <Attr ID="7">Value7</Attr>
                <Attr ID="8">Value8</Attr>
                <Attr ID="9">Value9</Attr>
              </Element>
            </Attr>
            <Rel ID="3827" Action="delete" />
            <Rel ID="2284" Role="parent">
              <Element ID="3827" />
              <Element ID="3829" />
              <Attr ID="665">1</Attr>
            </Rel>
            <Rel ID="0" Type="23" Role="child">
              <Element ID="3830" />
              <Attr ID="67" />
            </Rel>
          </Element>
          <Element ID="0" Type="87">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">569</Attr>
            <Attr ID="234">
              <Element ID="0" Type="24">
                <Attr ID="7">Value10</Attr>
                <Attr ID="8">Value11</Attr>
                <Attr ID="9">Value12</Attr>
              </Element>
            </Attr>
          </Element>
          <Element ID="1235" Action="delete" />
        </Elements>

    Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by JavaScript). And there may be an Action="delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet.

    There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:

        ElementID  2284
        ElementID1 1234
        ElementID2 3827

    Having multiple children in one relationship isn't currently supported, but it would be nice later.
    Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.NET at some point. A persistence engine like Linq or NHibernate is probably not acceptable right now--I just want to get this already-working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible.

    P.S. I'd like to handle unicode characters as well as very long strings (10k+).

    UPDATE: I have had this working for some time and I used the ADO Recordset save-to-stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

    Existing data elements: Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

    New elements: JavaScript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.

        var newid = 0;

        function metadataAdd(reference, nameid, value) {
            var t = document.createElement('input');
            t.setAttribute('name', nameid);
            t.setAttribute('id', nameid);
            t.setAttribute('type', 'hidden');
            t.setAttribute('value', value);
            reference.appendChild(t);
        }

        function multiAdd(target, parentelementid, attrid, elementtypeid) {
            var proto = document.getElementById('a' + attrid + '_proto');
            var instance = document.createElement('p');
            target.parentNode.parentNode.insertBefore(instance, target.parentNode);
            var thisid = ++newid;
            instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
            instance.id = 'n' + thisid;
            instance.className += ' new';
            metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
            metadataAdd(instance, 'n' + thisid + '_c', attrid);
            metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
            return false;
        }

    Example: template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). All attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created:

        n1_t - the element type of the element to be created
        n1_p - the parent id of the element (if it is a relationship)
        n1_c - the child id of the element (if it is a relationship)

    Deleting elements: A hidden input is created of the form e12345_t with its value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete.

    With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
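    As a purely hypothetical illustration (IDs and values invented here), a form post under this scheme might carry pairs like the following; each key parses with the regular expression shown in the next section:

        e12345_a678=Blue     (update attribute 678 on existing element 12345)
        e12399_t=0           (delete element 12399: "set type to 0")
        n1_t=24              (create new element n1 of type 24)
        n1_p=12345           (parent of new element n1, for relationships)
        n1_a7=Value4         (set attribute 7 on new element n1)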
    When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code):

        Set Data = Server.CreateObject("ADODB.Recordset")
        Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
        Data.CursorLocation = adUseClient
        Data.CursorType = adOpenDynamic
        Data.Open

    This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added with negative IDs to distinguish them from regular elements (rather than requiring a separate column to indicate whether the row is new or addresses an existing element). I use a regular expression to tear apart the form keys:

        "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

    Then, adding an attribute looks like this:

        Data.AddNew
        ElementID.Value = DataID
        AttrID.Value = Integerize(Matches(0).SubMatches(3))
        AttrValue.Value = Request.Form(Key)
        Data.Update

    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

        Set Cmd = Server.CreateObject("ADODB.Command")
        With Cmd
            Set .ActiveConnection = MyDBConn
            .CommandType = adCmdStoredProc
            .CommandText = "DataPost"
            .Prepared = False
            .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
            .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
        End With
        Result.Open Cmd ' previously created recordset object with options set

    Here's the function that does the XML conversion:

        Private Function XMLFromRecordset(Recordset)
            Dim Stream
            Set Stream = Server.CreateObject("ADODB.Stream")
            Stream.Open
            Recordset.Save Stream, adPersistXML
            Stream.Position = 0
            XMLFromRecordset = Stream.ReadText
        End Function

    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
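    For anyone wondering what the procedure actually receives: a client-side recordset with pending AddNew rows saved via adPersistXML comes out shaped roughly like this (a sketch from memory, with the schema section abbreviated and values invented); the pending rows sit inside rs:insert, which is why the OPENXML path in the procedure below targets '/xml/rs:data/rs:insert/z:row':

        <xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882"
             xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882"
             xmlns:rs="urn:schemas-microsoft-com:rowset"
             xmlns:z="#RowsetSchema">
          <s:Schema id="RowsetSchema">
            <!-- field definitions for ElementID, AttrID, Value -->
          </s:Schema>
          <rs:data>
            <rs:insert>
              <z:row ElementID="1234" AttrID="221" Value="Value" />
              <z:row ElementID="-1" AttrID="7" Value="Value4" />
            </rs:insert>
          </rs:data>
        </xml>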
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

        CREATE PROCEDURE [dbo].[DataPost]
            @ElementMetaData ntext,
            @ElementData ntext
        AS
        DECLARE @hdoc int

        --- snip ---

        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            ElementTypeID int,
            ElementID1 int,
            ElementID2 int
        )
        ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

        EXEC sp_xml_removedocument @hdoc

        --- snip ---

        UPDATE E
        SET E.ElementTypeID = M.ElementTypeID
        FROM Element E
        INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
        WHERE E.ElementID >= 1
            AND M.ElementTypeID >= 1

    The following query does the correlation of the negative new element IDs to the newly inserted ones:

        UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
        SET NewElementID = Scope_Identity() - @@RowCount + DataID
        WHERE ElementID < 0

    Other set-based queries do all the other work of validating that the attributes are allowed and are the correct data type, and of inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with two inputs was also much easier than sticking to some ideal of having everything in a single XML stream.

    Read the article

  • Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism

    For the latest code go to http://rapidioc.codeplex.com/

    Before getting too involved in generating the proxy, I thought it would be worthwhile going through the intended design; this is important as the next step is to start creating the constructors for the proxy.

    - Each proxy derives from a specified type
    - The proxy has a corresponding constructor for each of the base type constructors
    - The proxy has overrides for all methods and properties marked as virtual on the base type
    - For each overridden method, there is also a private method whose sole job is to call the base method
    - For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method

    The following class diagram shows the main classes and interfaces involved in the interception process. I'll go through each of them to explain their place in the overall proxy.

    IProxy Interface

    The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy instance to be cast as an IProxy, after which interceptors can be added simply by calling its AddInterceptor method. This is done internally within the proxy building process so the consumer of the API doesn't need knowledge of this.

    IInterceptor Interface

    The IInterceptor interface has one method: Handle. The Handle method accepts an IMethodInvocation parameter which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the Handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section, MethodInvocation.

    IMethodInvocation Interface & MethodInvocation class

    The MethodInvocation will contain one main method and multiple helper properties.

    Continue Method

    The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple interceptors, the next interceptor's Handle method is called. If all interceptors' Handle methods have been called, the Continue method then calls the base class method.

    Properties

    The MethodInvocation will contain multiple helper properties including at least the following:

    - Method Name (Read Only)
    - Method Arguments (Read and Write)
    - Method Argument Types (Read Only)
    - Method Result (Read and Write) – this property remains null if the method return type is void
    - Target Object (Read Only)
    - Return Type (Read Only)

    DefaultInterceptor class

    The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code:

        namespace Rapid.DynamicProxy.Interception
        {
            /// <summary>
            /// Default interceptor for the proxy.
            /// </summary>
            /// <typeparam name="TBase">The base type.</typeparam>
            public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class
            {
                /// <summary>
                /// Handles the specified method invocation.
                /// </summary>
                /// <param name="methodInvocation">The method invocation.</param>
                public void Handle(IMethodInvocation<TBase> methodInvocation)
                {
                    methodInvocation.Continue();
                }
            }
        }

    This is automatically created in the proxy and is the first interceptor that each method override calls. Its sole function is to ensure that if no interceptors have been added, the base method is still called.

    Custom Interceptor Example

    A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Continue() method. This means that any overridden methods within the User class will run within a transaction scope.

        public class MyInterceptor : IInterceptor<User<int, IRepository>>
        {
            public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)
            {
                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name setting to: " + methodInvocation.Arguments[0]);
                }
                using (TransactionScope scope = new TransactionScope())
                {
                    methodInvocation.Continue();
                }
                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);
                }
            }
        }
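    To show how a consumer might wire this up, here's a sketch (the CreateProxy factory call is a placeholder invented for this example; the cast-to-IProxy step is the part described above):

        // Obtain a proxy instance however the Rapid API exposes creation
        // (placeholder name), then attach the custom interceptor via IProxy.
        User<int, IRepository> user = CreateProxy<User<int, IRepository>>();
        ((IProxy)user).AddInterceptor(new MyInterceptor());

        // Virtual members are now intercepted:
        user.FirstName = "Sean"; // logged, and run inside a TransactionScope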
    Overridden Method Example

    To show a taster of what the overridden methods on the proxy would look like, the setter method for the property FirstName used in the above example would look something similar to the following (this is not real code but will look similar):

        public override void set_FirstName(string value)
        {
            set_FirstNameBaseMethodDelegate callBase =
                new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);

            object[] arguments = new object[] { value };

            IMethodInvocation<User<IRepository>> methodInvocation =
                new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);

            this.Interceptors[0].Handle(methodInvocation);
        }

    As you can see, a delegate instance is created which calls to a private method on the class; the private method calls the base method and would look like the following:

        private void set_FirstNameProxyGetBaseMethod(string value)
        {
            base.set_FirstName(value);
        }

    The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name and method arguments are passed into the methodInvocation constructor (there will be more data not illustrated here passed in when created, including method info, return types, argument types etc.). The DefaultInterceptor's Handle method is called with the methodInvocation instance as its parameter. Obviously methods can have return values, ref and out parameters etc.; in these cases the generated method override body will be slightly different from above. I'll go into more detail on these aspects as we build them.

    Conclusion

    I hope this has been useful. I can't guarantee that the proxy will look exactly like the above, but at the moment, this is pretty much what I intend to do. Always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what's going on. Cheers, Sean.

    Read the article

  • Creating a dynamic proxy generator with c# – Part 4 – Calling the base method

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism
    Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design
    Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors

    The plan for calling the base methods from the proxy is to create a private method for each overridden proxy method; this will allow the proxy to use a delegate to simply invoke the private method when required. Quite a few helper classes have been created to make this possible, so as usual I would suggest downloading or viewing the code at http://rapidioc.codeplex.com/. In this post I'm just going to cover the main points of creating the methods.

    Getting the methods to override

    The first two notable methods are for getting the methods.

        private static MethodInfo[] GetMethodsToOverride<TBase>() where TBase : class
        {
            return typeof(TBase).GetMethods().Where(x =>
                !methodsToIgnore.Contains(x.Name) &&
                (x.Attributes & MethodAttributes.Final) == 0)
                .ToArray();
        }

        private static StringCollection GetMethodsToIgnore()
        {
            return new StringCollection()
            {
                "ToString",
                "GetHashCode",
                "Equals",
                "GetType"
            };
        }

    The GetMethodsToIgnore string collection contains the names of methods that I don't want to override. In the GetMethodsToOverride method, you'll notice a bitwise AND which is basically saying not to include any methods marked Final, i.e. not virtual.

    Creating the MethodInfo for calling the base method

    This method should hopefully be fairly easy to follow; its only function is to create a MethodInfo which points to the correct base method, with the correct parameters.

        private static MethodInfo CreateCallBaseMethodInfo<TBase>(MethodInfo method) where TBase : class
        {
            Type[] baseMethodParameterTypes = ParameterHelper.GetParameterTypes(method, method.GetParameters());

            return typeof(TBase).GetMethod(
                method.Name,
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                null,
                baseMethodParameterTypes,
                null
            );
        }

        /// <summary>
        /// Get the parameter types.
        /// </summary>
        /// <param name="method">The method.</param>
        /// <param name="parameters">The parameters.</param>
        public static Type[] GetParameterTypes(MethodInfo method, ParameterInfo[] parameters)
        {
            Type[] parameterTypesList = Type.EmptyTypes;

            if (parameters.Length > 0)
            {
                parameterTypesList = CreateParametersList(parameters);
            }

            return parameterTypesList;
        }

    Creating the new private methods for calling the base method

    The following method outlines how I've created the private methods for calling the base class method.
        private static MethodBuilder CreateCallBaseMethodBuilder(TypeBuilder typeBuilder, MethodInfo method)
        {
            string callBaseSuffix = "GetBaseMethod";

            if (method.IsGenericMethod || method.IsGenericMethodDefinition)
            {
                return MethodHelper.SetUpGenericMethod
                    (
                        typeBuilder,
                        method,
                        method.Name + callBaseSuffix,
                        MethodAttributes.Private | MethodAttributes.HideBySig
                    );
            }
            else
            {
                return MethodHelper.SetupNonGenericMethod
                    (
                        typeBuilder,
                        method,
                        method.Name + callBaseSuffix,
                        MethodAttributes.Private | MethodAttributes.HideBySig
                    );
            }
        }

    The CreateCallBaseMethodBuilder is the entry point method for creating the call-base method. I've added a suffix to the base class's method name to keep it unique.

    Non Generic Methods

    Creating a non generic method is fairly simple:

        public static MethodBuilder SetupNonGenericMethod
            (
                TypeBuilder typeBuilder,
                MethodInfo method,
                string methodName,
                MethodAttributes methodAttributes
            )
        {
            ParameterInfo[] parameters = method.GetParameters();

            Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);

            Type returnType = method.ReturnType;

            MethodBuilder methodBuilder = CreateMethodBuilder
                (
                    typeBuilder,
                    method,
                    methodName,
                    methodAttributes,
                    parameterTypes,
                    returnType
                );

            ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);

            return methodBuilder;
        }

        private static MethodBuilder CreateMethodBuilder
            (
                TypeBuilder typeBuilder,
                MethodInfo method,
                string methodName,
                MethodAttributes methodAttributes,
                Type[] parameterTypes,
                Type returnType
            )
        {
            MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, methodAttributes, returnType, parameterTypes);

            return methodBuilder;
        }

    As you can see, you simply have to declare a method builder, get the parameter types, and set the method attributes you want.

    Generic Methods

    Creating generic methods takes a little bit more work.

        /// <summary>
        /// Sets up generic method.
        /// </summary>
        /// <param name="typeBuilder">The type builder.</param>
        /// <param name="method">The method.</param>
        /// <param name="methodName">Name of the method.</param>
        /// <param name="methodAttributes">The method attributes.</param>
        public static MethodBuilder SetUpGenericMethod
            (
                TypeBuilder typeBuilder,
                MethodInfo method,
                string methodName,
                MethodAttributes methodAttributes
            )
        {
            ParameterInfo[] parameters = method.GetParameters();

            Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);

            MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, methodAttributes);

            Type[] genericArguments = method.GetGenericArguments();

            GenericTypeParameterBuilder[] genericTypeParameters =
                GetGenericTypeParameters(methodBuilder, genericArguments);

            ParameterHelper.SetUpParameterConstraints(parameterTypes, genericTypeParameters);

            SetUpReturnType(method, methodBuilder, genericTypeParameters);

            if (method.IsGenericMethod)
            {
                methodBuilder.MakeGenericMethod(genericArguments);
            }

            ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);

            return methodBuilder;
        }

        private static GenericTypeParameterBuilder[] GetGenericTypeParameters
            (
                MethodBuilder methodBuilder,
                Type[] genericArguments
            )
        {
            return methodBuilder.DefineGenericParameters(GenericsHelper.GetArgumentNames(genericArguments));
        }

        private static void SetUpReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters)
        {
            if (method.IsGenericMethodDefinition)
            {
                SetUpGenericDefinitionReturnType(method, methodBuilder, genericTypeParameters);
            }
            else
            {
                methodBuilder.SetReturnType(method.ReturnType);
            }
        }

        private static void SetUpGenericDefinitionReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters)
        {
            if (method.ReturnType == null)
            {
                methodBuilder.SetReturnType(typeof(void));
            }
            else if (method.ReturnType.IsGenericType)
            {
                methodBuilder.SetReturnType(genericTypeParameters.Where
                    (x => x.Name == method.ReturnType.Name).First());
            }
            else
            {
                methodBuilder.SetReturnType(method.ReturnType);
            }
        }

    Ok, there are a few helper methods missing; basically there is way too much code to put in this post, so take a look at the code at http://rapidioc.codeplex.com/ to follow it through completely. Basically though, when dealing with generics there is extra work to do in terms of:

    - getting the generic argument types
    - setting up any generic parameter constraints
    - setting up the return type
    - setting up the method as a generic

    All of the information is easy to get via reflection from the MethodInfo.

    Emitting the new private method

    Emitting the new private method is relatively simple, as its only function is calling the base method and returning a result if the return type is not void.
        ILGenerator il = privateMethodBuilder.GetILGenerator();

        EmitCallBaseMethod(method, callBaseMethod, il);

        private static void EmitCallBaseMethod(MethodInfo method, MethodInfo callBaseMethod, ILGenerator il)
        {
            int privateParameterCount = method.GetParameters().Length;

            il.Emit(OpCodes.Ldarg_0);

            if (privateParameterCount > 0)
            {
                for (int arg = 0; arg < privateParameterCount; arg++)
                {
                    il.Emit(OpCodes.Ldarg_S, arg + 1);
                }
            }

            il.Emit(OpCodes.Call, callBaseMethod);

            il.Emit(OpCodes.Ret);
        }

    So in the main method-building method, an ILGenerator is created from the method builder. The ILGenerator performs the following actions:

    - Load the class (this) onto the stack using the hidden argument Ldarg_0.
    - Create an argument on the stack for each of the method parameters (starting at 1, because 0 is the hidden argument).
    - Call the base method using the OpCodes.Call code and the MethodInfo we created earlier.
    - Call return on the method.

    Conclusion

    Now we have the private methods prepared for calling the base method, we have reached the last of the relatively easy part of the proxy building. Hopefully it hasn't been too hard to follow so far; there is a lot of code so I haven't been able to post it all, so please check it out at http://rapidioc.codeplex.com/. The next section should be up fairly soon; it's going to cover creating the delegates for calling the private methods created in this post.

    Kind Regards, Sean.

    Read the article

  • Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism
    Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design

    For the latest code go to http://rapidioc.codeplex.com/

    When building our proxy type, the first thing we need to do is build the constructors. There needs to be a corresponding constructor for each constructor on the passed-in base type. We also want to create a field to store the interceptors, and construct this list within each constructor. So assuming the passed-in base type is a User<int, IRepository> class, we're looking to generate constructor code like the following:

    Default Constructor

        public User`2_RapidDynamicBaseProxy()
        {
            this.interceptors = new List<IInterceptor<User<int, IRepository>>>();
            DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();
            this.interceptors.Add(item);
        }

    Parameterised Constructor

        public User`2_RapidDynamicBaseProxy(IRepository repository1) : base(repository1)
        {
            this.interceptors = new List<IInterceptor<User<int, IRepository>>>();
            DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();
            this.interceptors.Add(item);
        }

    As you can see, we first populate a field on the class with a new list of interceptors for the passed-in base type, then construct our DefaultInterceptor and add it to the interceptor collection. Although this seems like a relatively small task, there is a fair amount of work required to get this going. Instead of going through every line of code, please download the latest from http://rapidioc.codeplex.com/ and debug through. In this post I'm going to concentrate on explaining how it works.

    TypeBuilder

    The TypeBuilder class is the main class used to create the type. You instantiate a new TypeBuilder using the assembly module we created in part 1.

        /// <summary>
        /// Creates a type builder.
        /// </summary>
        /// <typeparam name="TBase">The type of the base class to be proxied.</typeparam>
        public static TypeBuilder CreateTypeBuilder<TBase>() where TBase : class
        {
            TypeBuilder typeBuilder = DynamicModuleCache.Get.DefineType
                (
                    CreateTypeName<TBase>(),
                    TypeAttributes.Class | TypeAttributes.Public,
                    typeof(TBase),
                    new Type[] { typeof(IProxy) }
                );

            if (typeof(TBase).IsGenericType)
            {
                GenericsHelper.MakeGenericType(typeof(TBase), typeBuilder);
            }

            return typeBuilder;
        }

        private static string CreateTypeName<TBase>() where TBase : class
        {
            return string.Format("{0}_RapidDynamicBaseProxy", typeof(TBase).Name);
        }

    As you can see, I've created a new public class derived from TBase which also implements my IProxy interface; this is used later for adding interceptors. If the base type is generic, the following GenericsHelper.MakeGenericType method is called.

    GenericsHelper

        using System;
        using System.Reflection.Emit;

        namespace Rapid.DynamicProxy.Types.Helpers
        {
            /// <summary>
            /// Helper class for generic types and methods.
            /// </summary>
            internal static class GenericsHelper
            {
                /// <summary>
                /// Makes the typeBuilder a generic.
                /// </summary>
                /// <param name="baseType">The base type.</param>
                /// <param name="typeBuilder">The type builder.</param>
                public static void MakeGenericType(Type baseType, TypeBuilder typeBuilder)
                {
                    Type[] genericArguments = baseType.GetGenericArguments();

                    string[] genericArgumentNames = GetArgumentNames(genericArguments);

                    GenericTypeParameterBuilder[] genericTypeParameterBuilder
                        = typeBuilder.DefineGenericParameters(genericArgumentNames);

                    typeBuilder.MakeGenericType(genericTypeParameterBuilder);
                }

                /// <summary>
                /// Gets the argument names from an array of generic argument types.
                /// </summary>
                /// <param name="genericArguments">The generic arguments.</param>
                public static string[] GetArgumentNames(Type[] genericArguments)
                {
                    string[] genericArgumentNames = new string[genericArguments.Length];

                    for (int i = 0; i < genericArguments.Length; i++)
                    {
                        genericArgumentNames[i] = genericArguments[i].Name;
                    }

                    return genericArgumentNames;
                }
            }
        }

    As you can see, I'm getting all of the generic argument types and names, creating a GenericTypeParameterBuilder, and then using the typeBuilder to make the new type generic.

    InterceptorsField

    The interceptors field will store a List<IInterceptor<TBase>>. Fields are simply made using the FieldBuilder class. The following code demonstrates how to create the interceptor field:

        FieldBuilder interceptorsField = typeBuilder.DefineField
            (
                "interceptors",
                typeof(System.Collections.Generic.List<>).MakeGenericType(typeof(IInterceptor<TBase>)),
                FieldAttributes.Private
            );

    The field will now exist within the new type, although it currently has no data; we'll deal with this in the constructor.

    Add method for interceptorsField

    To enable us to add to the interceptorsField list, we are going to utilise the Add method that already exists within the System.Collections.Generic.List class. We still, however, have to create the MethodInfo necessary to call the Add method. This can be done similar to the following:

        MethodInfo addInterceptor = typeof(List<>)
            .MakeGenericType(new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) })
            .GetMethod
            (
                "Add",
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                null,
                new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) },
                null
            );

    So we've created a List<IInterceptor<TBase>> type, then using that type created a MethodInfo called Add which accepts an IInterceptor<TBase>. Now in our constructor we can use this to call this.interceptors.Add(// interceptor);

    Building the Constructors

    This will be the first hard-core part of the proxy building process, so I'm going to show the class and then try to explain what everything is doing. For a clear view, download the source from http://rapidioc.codeplex.com/, go to the test project and debug through the constructor building section. Anyway, here it is:

    DynamicConstructorBuilder

        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Interception;
        using Rapid.DynamicProxy.Types.Helpers;

        namespace Rapid.DynamicProxy.Types.Constructors
        {
            /// <summary>
            /// Class for creating the proxy constructors.
            /// </summary>
            internal static class DynamicConstructorBuilder
            {
                /// <summary>
                /// Builds the constructors.
                /// </summary>
                /// <typeparam name="TBase">The base type.</typeparam>
                /// <param name="typeBuilder">The type builder.</param>
                /// <param name="interceptorsField">The interceptors field.</param>
                public static void BuildConstructors<TBase>
                    (
                        TypeBuilder typeBuilder,
                        FieldBuilder interceptorsField,
                        MethodInfo addInterceptor
                    )
                    where TBase : class
                {
                    ConstructorInfo interceptorsFieldConstructor = CreateInterceptorsFieldConstructor<TBase>();

                    ConstructorInfo defaultInterceptorConstructor = CreateDefaultInterceptorConstructor<TBase>();

                    ConstructorInfo[] constructors = typeof(TBase).GetConstructors();

                    foreach (ConstructorInfo constructorInfo in constructors)
                    {
                        CreateConstructor<TBase>
                            (
                                typeBuilder,
                                interceptorsField,
                                interceptorsFieldConstructor,
                                defaultInterceptorConstructor,
                                addInterceptor,
                                constructorInfo
                            );
                    }
                }

                #region Private Methods

                private static void CreateConstructor<TBase>
                    (
                        TypeBuilder typeBuilder,
                        FieldBuilder interceptorsField,
                        ConstructorInfo interceptorsFieldConstructor,
                        ConstructorInfo defaultInterceptorConstructor,
                        MethodInfo AddDefaultInterceptor,
                        ConstructorInfo constructorInfo
                    ) where TBase : class
                {
                    Type[] parameterTypes = GetParameterTypes(constructorInfo);

                    ConstructorBuilder constructorBuilder = CreateConstructorBuilder(typeBuilder, parameterTypes);

                    ILGenerator cIL = constructorBuilder.GetILGenerator();

                    LocalBuilder defaultInterceptorMethodVariable =
                        cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase)));

                    ConstructInterceptorsField(interceptorsField, interceptorsFieldConstructor, cIL);

                    ConstructDefaultInterceptor(defaultInterceptorConstructor, cIL, defaultInterceptorMethodVariable);

                    AddDefaultInterceptorToInterceptorsList
                        (
                            interceptorsField,
                            AddDefaultInterceptor,
                            cIL,
                            defaultInterceptorMethodVariable
                        );

                    CreateConstructor(constructorInfo, parameterTypes, cIL);
                }

                private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)
                {
                    cIL.Emit(OpCodes.Ldarg_0);

                    if (parameterTypes.Length > 0)
                    {
                        LoadParameterTypes(parameterTypes, cIL);
                    }

                    cIL.Emit(OpCodes.Call, constructorInfo);
                    cIL.Emit(OpCodes.Ret);
                }

                private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)
                {
                    for (int i = 1; i <= parameterTypes.Length; i++)
                    {
                        cIL.Emit(OpCodes.Ldarg_S, i);
                    }
                }

                private static void AddDefaultInterceptorToInterceptorsList
                    (
                        FieldBuilder interceptorsField,
                        MethodInfo AddDefaultInterceptor,
                        ILGenerator cIL,
                        LocalBuilder defaultInterceptorMethodVariable
                    )
                {
                    cIL.Emit(OpCodes.Ldarg_0);
                    cIL.Emit(OpCodes.Ldfld, interceptorsField);
                    cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);
                    cIL.Emit(OpCodes.Callvirt, AddDefaultInterceptor);
                }

                private static void ConstructDefaultInterceptor
                    (
                        ConstructorInfo defaultInterceptorConstructor,
                        ILGenerator cIL,
                        LocalBuilder defaultInterceptorMethodVariable
                    )
                {
                    cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);
                    cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable);
                }

                private static void ConstructInterceptorsField
                    (
                        FieldBuilder interceptorsField,
                        ConstructorInfo interceptorsFieldConstructor,
                        ILGenerator cIL
                    )
                {
                    cIL.Emit(OpCodes.Ldarg_0);
                    cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);
                    cIL.Emit(OpCodes.Stfld, interceptorsField);
                }

                private static ConstructorBuilder CreateConstructorBuilder(TypeBuilder typeBuilder, Type[] parameterTypes)
                {
                    return typeBuilder.DefineConstructor
                        (
                            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.RTSpecialName
                            | MethodAttributes.HideBySig, CallingConventions.Standard, parameterTypes
                        );
                }

                private static Type[] GetParameterTypes(ConstructorInfo constructorInfo)
                {
                    ParameterInfo[] parameterInfoArray = constructorInfo.GetParameters();

                    Type[] parameterTypes = new Type[parameterInfoArray.Length];

                    for (int p = 0; p < parameterInfoArray.Length; p++)
                    {
                        parameterTypes[p] = parameterInfoArray[p].ParameterType;
                    }

                    return parameterTypes;
                }

                private static ConstructorInfo CreateInterceptorsFieldConstructor<TBase>() where TBase : class
                {
                    return ConstructorHelper.CreateGenericConstructorInfo
                        (
                            typeof(List<>),
                            new Type[] { typeof(IInterceptor<TBase>) },
                            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic
                        );
                }

                private static ConstructorInfo CreateDefaultInterceptorConstructor<TBase>() where TBase : class
                {
                    return ConstructorHelper.CreateGenericConstructorInfo
                        (
                            typeof(DefaultInterceptor<>),
                            new Type[] { typeof(TBase) },
                            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic
                        );
                }

                #endregion
            }
        }

    So, the first two tasks within the class should be fairly clear: we are creating a ConstructorInfo for the interceptorsField list and a ConstructorInfo for the DefaultInterceptor; this is for instantiating them in each constructor. We then, using reflection, get an array of all of the constructors in the base class, loop through the array, and create a corresponding proxy constructor. Hopefully the code is fairly easy to follow, other than some new types and the dreaded opcodes.

    ConstructorBuilder

    This class defines a new constructor on the type.
    ILGenerator

    The ILGenerator allows the use of Reflection.Emit to create the method body.

    LocalBuilder

    The LocalBuilder allows the storage of data in local variables within a method; in this case it's the constructed DefaultInterceptor.

    Constructing the interceptors field

    The first bit of IL you'll come across as you follow through the code is the following private method, used for constructing the field list of interceptors:

        private static void ConstructInterceptorsField
            (
                FieldBuilder interceptorsField,
                ConstructorInfo interceptorsFieldConstructor,
                ILGenerator cIL
            )
        {
            cIL.Emit(OpCodes.Ldarg_0);
            cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);
            cIL.Emit(OpCodes.Stfld, interceptorsField);
        }

    The first thing to know about generating code using IL is that you are using a stack: if you want to use something, you need to push it up the stack, etc.

    OpCodes.Ldarg_0

    This opcode is a really interesting one. Basically each instance method has a hidden first argument of the containing class instance (static methods aside), and constructors are no different. This is the reason you can use syntax like this.myField. So back to the method: as we want to instantiate the List in the interceptorsField, first we need to load the class instance onto the stack, we then load the new object (new List<TBase>), and finally we store it in the interceptorsField. Hopefully that should follow easily enough in the method. In each constructor you would now have:

        this.interceptors = new List<User<int, IRepository>>();

    Constructing and storing the DefaultInterceptor

    The next bit of code we need to create is the constructed DefaultInterceptor. Firstly, we create a LocalBuilder to store the constructed type:

        LocalBuilder defaultInterceptorMethodVariable =
            cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase)));

    Once our LocalBuilder is ready, we then need to construct the DefaultInterceptor<TBase> and store it in the variable:

        private static void ConstructDefaultInterceptor
            (
                ConstructorInfo defaultInterceptorConstructor,
                ILGenerator cIL,
                LocalBuilder defaultInterceptorMethodVariable
            )
        {
            cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);
            cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable);
        }

    As you can see, using the ConstructorInfo named defaultInterceptorConstructor, we load the new object onto the stack. Then, using the store local opcode (OpCodes.Stloc), we store the new object in the LocalBuilder named defaultInterceptorMethodVariable.

    Add the constructed DefaultInterceptor to the interceptors field collection

    Using the Add method created earlier in this post, we are going to add the new DefaultInterceptor object to the interceptors field collection:

        private static void AddDefaultInterceptorToInterceptorsList
            (
                FieldBuilder interceptorsField,
                MethodInfo AddDefaultInterceptor,
                ILGenerator cIL,
                LocalBuilder defaultInterceptorMethodVariable
            )
        {
            cIL.Emit(OpCodes.Ldarg_0);
            cIL.Emit(OpCodes.Ldfld, interceptorsField);
            cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);
            cIL.Emit(OpCodes.Callvirt, AddDefaultInterceptor);
        }

    So, here's what's going on.
    - The class instance is first loaded onto the stack using the load-argument-at-index-0 opcode (OpCodes.Ldarg_0); remember the first argument is the hidden class instance.
    - The interceptorsField is then loaded onto the stack using the load field opcode (OpCodes.Ldfld).
    - We then load the DefaultInterceptor object we stored locally, using the load local opcode (OpCodes.Ldloc).
    - Then finally we call the AddDefaultInterceptor method using the call virtual opcode (OpCodes.Callvirt).

    Completing the constructor

    The last thing we need to do is complete the constructor:

        private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)
        {
            cIL.Emit(OpCodes.Ldarg_0);

            if (parameterTypes.Length > 0)
            {
                LoadParameterTypes(parameterTypes, cIL);
            }

            cIL.Emit(OpCodes.Call, constructorInfo);
            cIL.Emit(OpCodes.Ret);
        }

        private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)
        {
            for (int i = 1; i <= parameterTypes.Length; i++)
            {
                cIL.Emit(OpCodes.Ldarg_S, i);
            }
        }

    So, the first thing we do again is load the class instance using the load-argument-at-index-0 opcode (OpCodes.Ldarg_0). We then load each parameter using OpCodes.Ldarg_S; this opcode allows us to specify an index position for each argument. We then set up calling the base constructor using OpCodes.Call and the base constructor's ConstructorInfo. Finally, all methods are required to return, even when they have a void return. As there are no values on the stack after the OpCodes.Call line, we can safely call OpCodes.Ret to give the constructor a void return. If there was a value, we would have to pop the value off the stack before calling return, otherwise the method would try to return a value.

    Conclusion

    This was a slightly hardcore post, but hopefully it hasn't been too hard to follow. The main thing is that a number of the really useful opcodes have been used, and now the dynamic proxy is capable of being constructed. If you download the code and debug through the tests at http://rapidioc.codeplex.com/, you'll be able to create proxies at this point; they cannot do anything in terms of interception, but you can happily run the tests, call base methods and properties, and also take a look at the created assembly in Reflector. Hope this is useful. The next post should be up soon; it will be covering creating the private methods for calling the base class methods and properties. Kind Regards, Sean.

    Read the article

  • Connect ViewModel and View using Unity

    - by brainbox
    In this post I want to describe the approach of connecting View and ViewModel which I'm using in my latest project. The main idea is to do it during resolve, inside the Unity container. It can be achieved using InjectionFactory, introduced in Unity 2.0:

        public static class MVVMUnityExtensions
        {
            public static void RegisterView<TView, TViewModel>(this IUnityContainer container) where TView : FrameworkElement
            {
                container.RegisterView<TView, TView, TViewModel>();
            }

            public static void RegisterView<TViewFrom, TViewTo, TViewModel>(this IUnityContainer container)
                where TViewTo : FrameworkElement, TViewFrom
            {
                container.RegisterType<TViewFrom>(new InjectionFactory(
                    c =>
                    {
                        var model = c.Resolve<TViewModel>();
                        var view = Activator.CreateInstance<TViewTo>();
                        view.DataContext = model;
                        return view;
                    }
                ));
            }
        }

    And here is a sample of how it could be used:

        var unityContainer = new UnityContainer();
        unityContainer.RegisterView<IFooView, FooView, FooViewModel>();
        IFooView view = unityContainer.Resolve<IFooView>(); // view with injected viewmodel in its datacontext
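    For completeness, the sample assumes a trio of types along these lines (names hypothetical; the view must ultimately derive from FrameworkElement so the DataContext assignment compiles):

        // Minimal stand-ins for the types used in the sample above.
        public interface IFooView { }

        public class FooView : UserControl, IFooView { } // UserControl derives from FrameworkElement

        public class FooViewModel { }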
    Please tell me your preferred way to connect viewmodel and view.
    Read the article

  • How to factor out data layer in nopCommerce and replace MS SQL with RavenDB?

    - by Kaveh Shahbazian
    I am new to nopCommerce and ecommerce in general, but I am involved in an ecommerce project. Now, from my past experiences with RavenDB (which were mostly absolutely pleasant) and based on the needs of the business (fast changes with awkward business workflows), it seemed an appealing option to have RavenDB handle all sorts of things related to the database. I do not fully understand the design and architecture of nopCommerce, so I have not reached a conclusion on how to factor out the data parts, since it seems the services layer does not actually abstract data-layer concepts away; it brings the EF working model into other layers. I have found another project, a nopCommerce fork, which used NuDB as its database. But it did not help, because NuDB still has the feel of an RDBMS and is not as different as RavenDB. Now, first: how can I learn about the internals of nopCommerce (other than investigating the code)? Its workflows? Its conventions? Second: has anyone tried something similar before with a NoSQL database (say MongoDB or RavenDB)? Is it possible to achieve this in a 1 (~2) month time frame? Thanks in advance;

    Read the article

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
    Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I'm paraphrasing because I can't remember the exact tweet and Twitter's search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be "how many SQL Server developers are not using source control?" because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age.

    Then I started thinking about it. "Perhaps I'm wrong," I pondered, "perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority." So, dear reader, I'm interested to know a little bit more about your use of source control:

    - Are you putting your SQL Server code into a source control system?
    - If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using?
    - What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)?
    - Why did you make those particular software choices?
    - Any interesting anecdotes to share in regard to your use of source control and SQL Server?

    To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies ("best" is totally subjective of course, but this is my blog so my decision is final); if you want to be considered, don't forget to leave contact details; email address, Twitter handle or similar will do. To start you off, and to perhaps get the brain cells whirring, here are my answers to the questions above:

    Are you putting your SQL Server code into a source control system? As I think I've already said…yes. Always.

    If so, what source control server software are you using? I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and, as part of a separate project, is trialing the use of Team Foundation Service. I have used SVN extensively in the past, which I am a fan of (I generally prefer it to TFS), and am trying to get my head around Git by using it for ObjectStorageHelper.

    What source control client software are you using? On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN.

    Why did you make those particular software choices? I generally use whatever the client uses, and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops.

    Any interesting anecdotes to share in regard to your use of source control and SQL Server? Not an anecdote as such, but I am going to share some frustrations about TFS. In many ways TFS is a great product, because it integrates many separate functions (source control, work item tracking, build agents) into one whole, and I'm firmly of the opinion that that is a good thing, if for no reason other than being able to associate your check-ins with a work item. However, like many people, there are aspects of TFS source control that annoy me day-in, day-out.

    Chief among them has to be the fact that it uses a file's read-only property to determine whether a file should be checked out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn't realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus should be checked in; the notion of "check-out" doesn't even exist. That sounds like a small thing, but you don't realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember… free software is up for grabs! @jamiet

    Read the article

  • Connect to localdb using Sql Server management studio

    - by Magnus Karlsson
    I was trying to find my database for LocalDB under localhost etc., but no luck. The following led me to just connect to it; kind of obvious really when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx

    High-Level Overview

    After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties:

    - LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it, and operates on data using the same T-SQL language as provided by SQL Express.
    - LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location.
    - LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down.
    - LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
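    In practice, that means typing (localdb)\v11.0 as the server name in SSMS's Connect to Server dialog rather than looking under localhost, or connecting from code with something like the sketch below (the .mdf path here is hypothetical):

        // Connecting to the automatic LocalDB instance from .NET;
        // the AttachDbFileName part is optional, and the file path is made up.
        using (var conn = new System.Data.SqlClient.SqlConnection(
            @"Data Source=(localdb)\v11.0;" +
            @"AttachDbFileName=C:\Data\MyDb.mdf;" +
            @"Integrated Security=True"))
        {
            conn.Open(); // starts the LocalDB process on demand and attaches the file
        }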

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let's talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I've chosen to write about spools because they seem to get a bad rap (even in my song I used the line "There's spooling from a CTE, they've got recursion needlessly"). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful.

    If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word 'spool', you'll notice it says there are 46 matches. 46! Yeah, that's what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn't used in SQL 2012, I won't describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I'll mention in a moment.

    In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you're going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I'm sure you use spools every time you use your sewing machine. I know I do. I can't think of a time when I've got out my sewing machine to do some sewing and haven't used a spool. However, I often run SQL queries that don't use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It's like I can just sew with my cotton despite it not being on a spool!

    Many of my favourite features in T-SQL do like to use spools though. The query below looks very similar to one you might write every day, but includes an OVER clause to return a column telling me the number of rows in my data set. I'll describe what's going on in a few paragraphs' time.
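    To make this concrete, here's a hedged sketch of such a query against AdventureWorks (the tables chosen here are assumptions for illustration; any join with COUNT(*) OVER () demonstrates the same effect):

        -- COUNT(*) OVER () returns the total row count as an extra column;
        -- the plan typically satisfies it by spooling the joined rows.
        SELECT p.Name,
               th.TransactionDate,
               COUNT(*) OVER () AS NumRows
        FROM Production.Product AS p
        INNER JOIN Production.TransactionHistory AS th
            ON th.ProductID = p.ProductID;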
Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows. And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done. There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query. Table v Index When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure. 
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool depends more on whether a Seek predicate is used than on the underlying structure. Recursive CTE I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012 – a sketch of a query of this shape appears below.) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.) Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? 
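    A minimal sketch of such a recursive query, using the HumanResources.Employee table and its ManagerID column from the pre-2012 AdventureWorks schema referred to above:

        WITH EmployeeTree AS (
            -- anchor: the top of the hierarchy
            SELECT EmployeeID, ManagerID, 0 AS Level
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL
            UNION ALL
            -- recursive part: rows here are pulled back off the spool
            SELECT e.EmployeeID, e.ManagerID, t.Level + 1
            FROM HumanResources.Employee AS e
            JOIN EmployeeTree AS t ON e.ManagerID = t.EmployeeID
        )
        SELECT EmployeeID, ManagerID, Level
        FROM EmployeeTree;

    Each row produced by the anchor or the recursive part is fed onto the Index Spool, and the Table Spool on the recursive side pulls those rows back off it, exactly as described above.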
At some point I’ll write a summary post – once I have you should find a comment below pointing at it. @rob_farley

    Read the article

  • Database Developers Can Now Save 20%

    - by stephen.garth
    Database developers can now increase productivity and save money at the same time. For a limited time, Oracle Store is offering a 20% discount on Oracle SQL Developer Data Modeler. Just enter the code SQLDDM at checkout to get the discount. Oracle SQL Developer Data Modeler is an independent, standalone product with a full spectrum of data and database modeling tools and utilities, including modeling for Entity Relationship Diagrams (ERD), Relational (database design), Data Type and Multi-dimensional modeling, full forward and reverse engineering and DDL code generation. SQL Developer Data Modeler can connect to any supported Oracle Database and is platform independent. Save 20% on Oracle SQL Developer Data Modeler at Oracle Store - Discount Code SQLDDM. Find out more about Oracle SQL Developer and Oracle SQL Developer Data Modeler.

    Read the article

  • Test interface implementation

    - by Michael
    I have an interface in our code base that I would like to be able to mock out for unit testing. I am writing a test implementation to allow the individual tests to override the specific methods they are concerned with, rather than implementing every method. I've run into a quandary over how the test implementation should behave if the test fails to override a method used by the method under test. Should I return a "non-value" (0, null) in the test implementation, or throw an UnsupportedOperationException to explicitly fail the test?

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
    We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are: A student looking to work in the database or business intelligence fields A database professional who is between jobs or wants a better one A developer looking to step up to something new On a limited budget and can’t afford professional SQL Server training Able to attend training from 9 to 5 on May 17, 2013 AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con! One Recipient Each will Attend: Denny Cherry: SQL Server Security http://sqlsecurity.eventbrite.com/ Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance http://surfmulticore.eventbrite.com/ Stacia Misner: Languages of BI http://languagesofbi.eventbrite.com/ Bill Pearson: Practical Self-Service BI with PowerPivot for Excel http://selfservicebi.eventbrite.com/ Eddie Wuerch: The DBA Skills Upgrade Toolkit http://dbatoolkit.eventbrite.com/ If you are interested in attending these pre-cons send an email by April 30, 2013 to [email protected] and tell us: Why you are a good candidate to receive this scholarship Which sessions you’d like to attend, and why (list multiple sessions in order of preference) What the session will teach you and how it will help you achieve your goals The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK! P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx. * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • How to build a Singleton-like dependency injector replacement (Php)

    - by Erparom
    I know out there are a lot of excellent containers, even frameworks almost entirely DI based with good strong IoC classes. However, this doesn't help me to "define" a new pattern. (This is PHP code but understandable to anyone.) Suppose we have:

        //Declares the singleton
        class BookSingleton {
            private $author;
            private static $bookInstance;
            private static $isLoaned = FALSE;

            //The private constructor
            private function __construct() {
                $this->author = "Onecrappy Writer Ofcheap Novels";
            }

            //Sets the global isLoaned state and also gets the self instance
            public static function loanBook() {
                if (self::$isLoaned === TRUE) {
                    //Book already taken, so return false
                    return FALSE;
                } else {
                    //Ok, not loaned, let's instantiate (if needed) and loan
                    if (!isset(self::$bookInstance)) {
                        self::$bookInstance = new BookSingleton();
                    }
                    self::$isLoaned = TRUE;
                    return self::$bookInstance;
                }
            }

            //Return loaned state to false, so another book reader can take the book
            public function returnBook() {
                self::$isLoaned = FALSE;
            }

            public function getAuthor() {
                return $this->author;
            }
        }

    Then we get the singleton consumption class:

        //Consumes the Singleton
        class BookBorrower {
            private $borrowedBook;
            private $haveBookState;

            public function __construct() {
                $this->haveBookState = FALSE;
            }

            //Use the singleton-pattern behavior
            public function borrowBook() {
                $this->borrowedBook = BookSingleton::loanBook();
                //Check if it was successfully borrowed
                if (!$this->borrowedBook) {
                    $this->haveBookState = FALSE;
                } else {
                    $this->haveBookState = TRUE;
                }
            }

            public function returnBook() {
                if ($this->haveBookState) {
                    $this->borrowedBook->returnBook();
                }
                $this->haveBookState = FALSE;
            }

            public function getAuthorAndTitle() {
                if ($this->haveBookState) {
                    return "The book is loaned, the author is " . $this->borrowedBook->getAuthor();
                } else {
                    return "I don't have the book, perhaps someone else took it";
                }
            }
        }

    At last, we get a client to test the behavior:

        function __autoload($class) {
            require_once $class . '.php';
        }

        function write($whatever, $breaks) {
            for ($break = 0; $break < $breaks; $break++) {
                $whatever .= "\n";
            }
            echo nl2br($whatever);
        }

        write("Begin Singleton test", 2);
        $borrowerJuan = new BookBorrower();
        $borrowerPedro = new BookBorrower();

        write("Juan asks for the book", 1);
        $borrowerJuan->borrowBook();
        write("Book Borrowed? ", 1);
        write($borrowerJuan->getAuthorAndTitle(), 2);

        write("Pedro asks for the book", 1);
        $borrowerPedro->borrowBook();
        write("Book Borrowed? ", 1);
        write($borrowerPedro->getAuthorAndTitle(), 2);

        write("Juan returns the book", 1);
        $borrowerJuan->returnBook();
        write("Returned Book Juan? ", 1);
        write($borrowerJuan->getAuthorAndTitle(), 2);

        write("Pedro asks again for the book", 1);
        $borrowerPedro->borrowBook();
        write("Book Borrowed? ", 1);
        write($borrowerPedro->getAuthorAndTitle(), 2);

    This will end up in the expected behavior:

        Begin Singleton test
        Juan asks for the book
        Book Borrowed? The book is loaned, the author is Onecrappy Writer Ofcheap Novels
        Pedro asks for the book
        Book Borrowed? I don't have the book, perhaps someone else took it
        Juan returns the book
        Returned Book Juan? I don't have the book, perhaps someone else took it
        Pedro asks again for the book
        Book Borrowed? The book is loaned, the author is Onecrappy Writer Ofcheap Novels

    So I want to make a pattern based on the DI technique able to do exactly the same, but without the singleton pattern.
    As far as I'm aware, I KNOW I must inject the book into the "borrowBook" function instead of taking a static instance:

        public function borrowBook(NonSingletonBook $book) {
            if (isset($this->borrowedBook) || $book->isLoaned()) {
                $this->haveBookState = FALSE;
                return FALSE;
            } else {
                $this->borrowedBook = $book;
                $this->haveBookState = TRUE;
                return TRUE;
            }
        }

    And at the client, just hand over the book:

        $borrowerJuan = new BookBorrower();
        $borrowerJuan->borrowBook(new NonSingletonBook());

    Etc... and so far so good, BUT... I'm moving the responsibility of "single instance" to the borrower, instead of keeping that responsibility inside the NonSingletonBook, which, since it no longer has a private constructor, can be instantiated as many times as you like... making new instances on each call. So, what MUST my NonSingletonBook class be, in order to never allow borrowers to have this same book twice? (aka keep the single instance). Because the dependency injector part of the code (the borrower) does not solve this for me AT ALL. Do I need a container with an "asShared" builder method with static behavior? Is there no way to encapsulate this functionality in the Book itself? "Hey, I'm a book and I shouldn't be instantiated more than once, I'm unique."

    Read the article

  • SQL Server: How do I generate the table schema and populate it with inserts in a script?

    - by Paula DiTallo
    Originally posted on: http://geekswithblogs.net/AskPaula/archive/2014/05/20/156469.aspx In SSMS, there's a Generate Script utility (read: only available under version 2008 and up). Here are the steps you would need to take to make use of the utility:
    1. Right click on the database you're interested in and go to Tasks -> Generate Scripts.
    2. Select the tables and/or any other objects you'd like in order to get them into the script.
    3. Navigate to Set scripting options and click on Advanced.
    4. Under the General category, navigate to Type of data to script.
    5. Select the Schema and Data option to get the insert statements generated. Click OK.

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with the audit of successful and failed logins being written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog – and therefore huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this – here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure – so there is no official Microsoft page describing the parameters it takes – however, there's a good blog post on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain – the default of 6 might not be sufficient for you.
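    A minimal sketch of that housekeeping (the retention value of 30 is only an example, and xp_instance_regwrite is itself undocumented – the supported route is SSMS: right-click SQL Server Logs > Configure):

        -- Close the current errorlog and start a new one
        EXEC sp_cycle_errorlog

        -- Increase the number of errorlogs retained, e.g. to 30
        EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
             N'Software\Microsoft\MSSQLServer\MSSQLServer',
             N'NumErrorLogs', REG_DWORD, 30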

    Read the article

  • How to TDD test that objects are being added to a collection if the collection is private?

    - by Joshua Harris
    Assume that I planned to write a class that worked something like this:

        public class GameCharacter {
            private Collection<CharacterEffect> _collection;

            public void Add(CharacterEffect e) { ... }
            public void Remove(CharacterEffect e) { ... }
            public boolean Contains(CharacterEffect e) { ... }
        }

    When added, an effect does something to the character and is then added to the _collection. When it is removed, the effect reverts the change to the character and is removed from the _collection. It's easy to test if the effect was applied to the character, but how do I test that the effect was added to _collection? What test could I write to start constructing this class? I could write a test where Contains would return true for a certain effect being in _collection, but I can't arrange a case where that function would return true, because I haven't implemented the Add method that is needed to place things in _collection. OK, so since Contains is dependent on having Add working, why don't I try to create Add first? Well, for my first test I need to figure out whether the effect was added to _collection. How would I do that? The only way to see if an effect is in _collection is with the Contains function. The only way I could think to test this would be to use a FakeCollection that mocks the Add, Remove, and Contains of a real collection, but I don't want _collection being affected by outside sources. I don't want to add a setEffects(Collection effects) function, because I do not want the class to have that functionality. The one thing that I am thinking could work is this:

        public class GameCharacter<C extends Collection> {
            private Collection<CharacterEffect> _collection;

            public GameCharacter() {
                _collection = new C<CharacterEffect>();
            }
        }

    But that is just silly, making me declare what some private data structure's type is on every declaration of the character. Is there a way for me to test this without breaking TDD principles while still allowing me to keep my collection private?

    Read the article

  • Visual Basic link to SQL output to Word

    - by CLO_471
    I am in need of some advice/references. I am currently trying to develop a legal document interface. There are certain fields I need to query out of my SQL db and have those fields output into a document that can be printed. I am trying to develop a user interface where people can enter fields that will output to a document template, but at the same time I need the template to be able to pull data from the SQL database. This is the reason I think VB might be my best choice, and because it is one of the only OOP languages I am familiar with presently. Does anyone know the best way to handle this type of job? I know that you can use VBA within MS Word and have the form output variables to a Word template. But is there a way to have the Word document also pull information from the SQL db? Is the best option to use VB linked to SQL and run queries to get the information from the database, and then have it output to a form within VB? Is it possible for VB to be linked to a SQL db and output variables and SQL fields to a Word template? I have looked into Mail Merge and I see that it allows users to pull data from an Access query, but I don't think it would be easy to automate, and it seems that users would need an advanced knowledge of MS Word and Access to handle this. I am not finding much useful information online, so I came here. Any advice or references would be greatly appreciated. If there is a better way please let me know.

    Read the article

  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
    If you are interested in learning about our exciting new 12c SQL pattern matching feature then mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone) Supriya Ananth, who is one of our top curriculum developers at Oracle, will be hosting an OLL webcast on our new SQL pattern matching feature. The ability to recognize patterns in a sequence of rows has been a capability that was widely desired, but not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL to do the following: Logically partition and order the data using the PARTITION BY and ORDER BY clauses. Use regular-expression syntax to define patterns of rows to seek using the PATTERN clause – these patterns are a powerful and expressive feature, applied to the pattern variables you define. Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause. Define measures, which are expressions usable in the MEASURES clause of the SQL query. For more information and to register for this exciting webcast please visit the OLL Live website, see here: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143.  Please note - if the above link does not work then go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link or logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
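    To give a flavour of the syntax, here is a sketch along the lines of the classic stock-ticker example from Oracle's documentation – finding V-shaped patterns (a run of falling prices followed by a run of rising prices) in a Ticker table with symbol, tstamp and price columns:

        SELECT *
        FROM Ticker MATCH_RECOGNIZE (
             PARTITION BY symbol            -- one search per symbol
             ORDER BY tstamp                -- the pattern runs along the time axis
             MEASURES STRT.tstamp AS start_tstamp,
                      LAST(DOWN.tstamp) AS bottom_tstamp,
                      LAST(UP.tstamp) AS end_tstamp
             ONE ROW PER MATCH
             AFTER MATCH SKIP TO LAST UP
             PATTERN (STRT DOWN+ UP+)       -- a start row, then falls, then rises
             DEFINE
                DOWN AS DOWN.price < PREV(DOWN.price),
                UP AS UP.price > PREV(UP.price)
        ) MR
        ORDER BY MR.symbol, MR.start_tstamp;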

    Read the article

  • How to use DI and DI containers

    - by Pinetree
    I am building a small PHP mvc framework (yes, yet another one), mostly for learning purposes, and I am trying to do it the right way, so I'd like to use a DI container, but I am not asking which one to use but rather how to use one. Without going into too much detail, the mvc is divided into modules which have controllers which render views for actions. This is how a request is processed:
    - a Main object instantiates a Request object and a Router, and injects the Request into the Router to figure out which module was called
    - it then instantiates the Module object and sends the Request to that
    - the Module creates a ModuleRouter and sends it the Request to figure out the controller and action
    - it then creates the Controller and the ViewRenderer, and injects the ViewRenderer into the Controller (so that the controller can send data to the view)
    - the ViewRenderer needs to know which module, controller and action were called to figure out the path to the view scripts, so the Module has to figure this out and inject it into the ViewRenderer
    - the Module then calls the action method on the controller and calls the render method on the ViewRenderer
    For now, I do not have any DI container set up, but what I do have are a bunch of initX() methods that create the required component if it is not already there. For instance, the Module has the initViewRenderer() method. These init methods get called right before that component is needed, not before, and if the component was already set it will not initialize it. This allows for the components to be switched, but it does not require manually setting them if they are not there. Now, I'd like to do this by implementing a DI container, but still keep the manual configuration to a bare minimum, so if the directory structure and naming convention is followed, everything should work, without even touching the config. If I use the DI container, do I then inject it into everything (the container would inject itself when creating a component), so that other components can use it? When do I register components with the DI? Can a component register other components with the DI during run-time? Do I create a 'common' config and use that? How do I then figure out on the fly which components I need and how they need to be set up? If Main uses Router which uses Request, Main then needs to use the container to get Module (or does the module need to be found and set beforehand? How?). Module uses Router but needs to figure out the settings for the ViewRenderer and the Controller on the fly, not in advance, so my DI container can't be setting those on the Module before the module figures out the controller and action... What if the controller needs some other service? Do I inject the container into every controller? If I start doing that, I might just inject it into everything... Basically I am looking for the best practices when dealing with stuff like this. I know what DI is and what DI containers do, but I am looking for guidance on using them in real life, not some isolated examples on the net. Sorry for the lengthy post and many thanks in advance.

    Read the article

  • SQL Server 2008 R2: These are a Few of My Favorite Things

    - by smisner
    This month's T-SQL Tuesday is hosted by Jorge Segarra (blog | twitter), who decided that we should write about our favorite new feature in SQL Server 2008 R2. The majority of my published work concentrates on Reporting Services, so the obvious answer for me is...Reporting Services. I can't pick just one thing in Reporting Services, so instead I thought I'd compile a list of my posts on the new features in Reporting Services 2008 R2: Map Wizard for spatial data (The World is But a Stage), Pagination features (I've Got Your Page Number), Lookup functions (Look Up, Look Down, Look All Around - Part I, Part II, Part III), Test Connection button (Testing, Testing 1-2-3), and Conditional formatting based on render format, i.e. RenderFormat (As You Like It). And I wrote an overview of the business intelligence features in SQL Server 2008 R2 for Microsoft Press in the free e-book, Introducing Microsoft SQL Server 2008 R2, if you're curious about what else is new in both the BI platform and the relational engine.

    Read the article

  • Could someone help me understand SQL TDE Database encryption?

    - by SLC
    I don't quite follow how it works. According to the MSDN Article there is a big hierarchy of keys protecting other keys and passwords. At some point the database is encrypted. You query the database which is encrypted, and it works seamlessly. If you're able to simply connect to the database as normal and not have to worry about any of the encryption from a developer point of view, how exactly is it secure? Surely anyone can simply connect and do select * from x and the data is revealed. Sorry my question is a bit scattered, I am just very confused by the article.
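    For what it's worth, the seamlessness is the point: TDE encrypts the data and log files at rest (and therefore backups too), and pages are decrypted in memory for any authorized connection – so it protects against someone stealing the .mdf or backup files, not against someone with a valid login running select * from x. A sketch of the setup sequence that key hierarchy boils down to (database name, certificate name and password are placeholders):

        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<a strong password>';
        CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
        GO
        USE MyDatabase;
        CREATE DATABASE ENCRYPTION KEY
             WITH ALGORITHM = AES_256
             ENCRYPTION BY SERVER CERTIFICATE TDECert;
        ALTER DATABASE MyDatabase SET ENCRYPTION ON;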

    Read the article

  • At what size of data does it become beneficial to move from SQL to NoSQL?

    - by wobbily_col
    As a relational database programmer (most of the time), I read articles about how relational databases don't scale, and NoSQL solutions such as MongoDB do. As most of the databases I have developed so far have been small to mid scale, I have never had a problem that hasn't been solved by some indexing, query optimization or schema redesign. What sort of size would I expect to see MySQL struggling with? How many rows? (I know this is going to depend on the application and the type of data stored. The one that got me thinking was basically a genetics database, so it would have one main table, with 3 or 4 lookup tables. The main table will contain, amongst other things, a chromosome reference and a position coordinate. It will likely get queried for a number of entries between two positions on a chromosome, to see what is stored there.)
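    For the chromosome/position workload described in that parenthesis, the standard answer is a composite index, which keeps range queries cheap even at very large row counts – a sketch with hypothetical table and column names:

        CREATE TABLE variant (
            id         BIGINT     NOT NULL PRIMARY KEY,
            chromosome VARCHAR(5) NOT NULL,  -- e.g. '7', 'X', 'MT'
            position   BIGINT     NOT NULL   -- coordinate on the chromosome
            -- ...other columns...
        );

        CREATE INDEX ix_variant_chrom_pos ON variant (chromosome, position);

        -- "what is stored between two positions on a chromosome":
        SELECT id, position
        FROM variant
        WHERE chromosome = '7'
          AND position BETWEEN 55000000 AND 55300000;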

    Read the article

  • Explicitly pass context object versus injecting with IoC

    - by SonOfPirate
    I have a layered service application where the service layer delegates operations into the domain layer for execution. Many of these operations need to know the context under which they are operating. (The context includes the identity of the current user, culture information, etc., received from the caller.) For example, I have an API method that returns a list of announcements. The list is based on the current user's role and each announcement is localized to their culture. The API is a thin facade that delegates to an Application Service in my domain layer. The Application Service method obviously needs to know the context of the current request/operation, as another call to the same API from another user should result in a different list. Within this method, we also have logging that uses some of the context information, so we have a clear understanding of the context in which the operation was performed (this is especially useful if something goes wrong). While this is a contrived example, in the real world my Application Services will coordinate operations with many collaborative components, any number of them also needing the context information. My choice is to pass the context to the Application Service, which would then pass it with any calls to collaborators, or to have the IoC container satisfy the dependency the Application Service and any collaborators have on the context. I am wondering whether it is considered good/bad, best practice/code smell, etc. to pass the context object as a parameter to the domain methods, versus injecting the context via an IoC container. (EDIT: I should mention that the context object is instantiated per-request.)

    Read the article

  • The overlooked OUTPUT clause

    - by steveh99999
    I often find myself applying ad-hoc data updates to production systems – usually running scripts written by other people. One of my favourite features of SQL syntax is the OUTPUT clause – I find this is rarely used, and I often wonder if this is due to a lack of awareness of the feature. The OUTPUT clause was added to SQL Server in the SQL 2005 release – so it has been around for quite a while now, yet I often see scripts like this:

        SELECT somevalue FROM sometable WHERE keyval = XXX

        UPDATE sometable SET somevalue = newvalue WHERE keyval = XXX

        -- now check the update has worked...
        SELECT somevalue FROM sometable WHERE keyval = XXX

    This can be rewritten to achieve the same end result using the OUTPUT clause:

        UPDATE sometable
        SET somevalue = newvalue
        OUTPUT deleted.somevalue AS 'old value',
               inserted.somevalue AS 'new value'
        WHERE keyval = XXX

    The UPDATE statement with the OUTPUT clause also requires less IO – i.e. I've replaced three SQL statements with one, using only a third of the IO. If you are not aware of the power of the OUTPUT clause, I recommend you look at the OUTPUT clause in Books Online. And finally, here's an example of the output produced, using the Northwind database... [screenshot in the original post]
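    One related variation worth knowing: OUTPUT can also capture the old and new values into a table variable, so the rest of a script can use them – a minimal sketch reusing the placeholder names above:

        DECLARE @audit TABLE (OldValue INT, NewValue INT)

        UPDATE sometable
        SET somevalue = newvalue
        OUTPUT deleted.somevalue, inserted.somevalue INTO @audit
        WHERE keyval = XXX

        SELECT OldValue, NewValue FROM @audit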

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends' and see world records. I was going to generate a table for each track submitted, to store the best times of each player who plays the track. However, I can't predict how many will be uploaded, and I imagine too many tables might cause problems – or is this a valid method? I considered saving each player's best times in a string in a single table field like so: level1:00.45;level2:00.43;level3:00.12. If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem, because the text would eventually reach the limit for varchar length. I also considered storing the times data in XML files. This would avoid database issues, and server disk space can be increased if needed. But I imagine this would be very slow. To update one player's best time on one level, I would have to check every node in the file to find their time record to update. Apologies for the wall of text. Any suggestions would be appreciated.
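    For comparison, the conventional relational shape for this is a single best-times table keyed by player and track – one row per player per track, so there is no table-per-track growth and no delimited string to outgrow (names here are illustrative):

        CREATE TABLE BestTime (
            PlayerId   INT NOT NULL,
            TrackId    INT NOT NULL,
            BestTimeMs INT NOT NULL,  -- best time in milliseconds
            PRIMARY KEY (PlayerId, TrackId)
        );

        -- World record for a track:
        SELECT TOP 1 PlayerId, BestTimeMs
        FROM BestTime
        WHERE TrackId = @TrackId
        ORDER BY BestTimeMs;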

    Read the article

< Previous Page | 246 247 248 249 250 251 252 253 254 255 256 257  | Next Page >