Search Results

Search found 37074 results on 1483 pages for 'define method'.


  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read SQL Server 2005 and 2008 Profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the component reads directly from the trace file.
Example usages:
- Cache warming for SQL Server Analysis Services
- Reading the flight recorder
- Finding the longest running queries on a server
- Analyzing statements for CPU or memory by user, or some other criteria you choose
Properties
The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.
- AccessMode (Enumeration): Determines how the Filename property is interpreted. The values available are DirectInput and Variable.
- Filename (String): Holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.
Trace Column Definition
Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.
- TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns
- TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns
The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".
If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so you can fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below:
Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".
Installation
The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations.
You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.
Downloads
- Trace Sources for SQL Server 2005
- Trace Sources for SQL Server 2008
Version History
- SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
- SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)


  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development, both its benefits and limitations.
To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context.
- Test - An item or action which measures something in some quantifiable form.
- Driven - The primary motivation or focus of a series of activities (process).
- Development - All phases of a software project/product from concept through delivery.
The above are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product."
There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result of this is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true where certain people DID understand the material but were not very good at taking this type of test).
Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language and tools, the remainder of this post will utilize Microsoft Visual Studio and Team Foundation Server (TFS) for examples.
It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid) that drives the ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but really only address a small aspect of the overall effort.
To see TDD in a bigger view, let's identify the various activities that are part of the Software Development LifeCycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within iterative methodologies such as Agile/Scrum. The key ones here are:
- Requirements Gathering
- Architecture
- Design
- Implementation
- Quality Assurance
Can each of these items be subjected to a process which establishes metrics (quantified metrics) that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a Class/Method) of the implementation matches the (test writer's perspective of) the appropriate design document. So what can we do?
For each of these areas, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and to providing clear information on what items need to be addressed (along with the appropriate time to address them - in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
Returning for a moment to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number.
The "Wrong Process"/"Right Answer" combination is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation... Is it correct? (for integral points)
int Solve(int m, int b, int x) { return m * x + b; }
Most people would respond "Yes". But let's take the question one step further... Is it correct for all possible values of m, b, x??? (no fair if you cheated by being focused on the bolded text!) Without additional information regarding constraints on "the possible values of m, b, x" the answer must be NO; there is the risk of overflow/wraparound that will produce an incorrect result!
To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "Correct" and has "Quality". Yet, how many of us (myself included) have written such code without even thinking about it. In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, etc. This leads directly to the types of system failures that plague so many projects.
This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply:
- Each item should be tested.
- The test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item.
We also add concepts that expand the scope and alter the style by recognizing:
- There are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way).
- Correctness and Quality cannot be solely measured by "correct results".
In the future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas....
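To make the overflow point above concrete, here is a small, self-contained C# sketch (not part of the original article) showing the same straight-line formula silently wrapping around for large inputs, and how performing the arithmetic in a checked context surfaces the violated assumption instead:

    using System;

    class OverflowDemo
    {
        // The "obviously correct" version from the discussion above.
        static int Solve(int m, int b, int x) { return m * x + b; }

        // The same formula, but with the arithmetic performed in a checked context
        // so that wraparound raises an OverflowException instead of returning garbage.
        static int SolveChecked(int m, int b, int x) { return checked(m * x + b); }

        static void Main()
        {
            Console.WriteLine(Solve(2, 1, 10));            // 21, as expected

            // Near int.MaxValue the multiplication silently wraps around:
            // this prints -1, a "valid looking" but meaningless result.
            Console.WriteLine(Solve(2, 1, int.MaxValue));

            try
            {
                Console.WriteLine(SolveChecked(2, 1, int.MaxValue));
            }
            catch (OverflowException)
            {
                // Only now is the missing requirement (bounds on m, b and x) visible.
                Console.WriteLine("Overflow: the requirement must bound m, b and x.");
            }
        }
    }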


  • Sliding collision response

    - by dbostream
    I have been reading plenty of tutorials about sliding collision responses yet I am not able to implement it properly in my project. What I want to do is make a puck slide along the rounded corner boards of a hockey rink. In my latest attempt the puck does slide along the boards but there are some strange velocity behaviors. First of all the puck slows down a lot pretty much right away and then it slides for awhile and stops before exiting the corner. Even if I double the speed I get a similar behavior and the puck does not make it out of the corner. I used some ideas from this document http://www.peroxide.dk/papers/collision/collision.pdf. This is what I have: Update method called from the game loop when it is time to update the puck (I removed some irrelevant parts). I use two states (current, previous) which are used to interpolate the position during rendering. public override void Update(double fixedTimeStep) { /* Acceleration is set to 0 for now. */ Acceleration.Zero(); PreviousState = CurrentState; _collisionRecursionDepth = 0; CurrentState.Position = SlidingCollision(CurrentState.Position, CurrentState.Velocity * fixedTimeStep + 0.5 * Acceleration * fixedTimeStep * fixedTimeStep); /* Should not this be affected by a sliding collision? and not only the position. */ CurrentState.Velocity = CurrentState.Velocity + Acceleration * fixedTimeStep; Heading = Vector2.NormalizeRet(CurrentState.Velocity); } private Vector2 SlidingCollision(Vector2 position, Vector2 velocity) { if(_collisionRecursionDepth > 5) return position; bool collisionFound = false; Vector2 futurePosition = position + velocity; Vector2 intersectionPoint = new Vector2(); Vector2 intersectionPointNormal = new Vector2(); /* I did not include the collision detection code, if a collision is detected the intersection point and normal in that point is returned. */ if(!collisionFound) return futurePosition; /* If no collision was detected it is safe to move to the future position. */ /* It is not exactly the intersection point, but slightly before. */ Vector2 newPosition = intersectionPoint; /* oldVelocity is set to the distance from the newPosition(intersection point) to the position it had moved to had it not collided. */ Vector2 oldVelocity = futurePosition - newPosition; /* Project the distance left to move along the intersection normal. */ Vector2 newVelocity = oldVelocity - intersectionPointNormal * oldVelocity.DotProduct(intersectionPointNormal); if(newVelocity.LengthSq() < 0.001) return newPosition; /* If almost no speed, no need to continue. */ _collisionRecursionDepth++; return SlidingCollision(newPosition, newVelocity); } What am I doing wrong with the velocity? I have been staring at this for very long so I have gone blind. I have tried different values of recursion depth but it does not seem to make it better. Let me know if you need more information. I appreciate any help. EDIT: A combination of Patrick Hughes' and teodron's answers solved the velocity problem (I think), thanks a lot! This is the new code: I decided to use a separate recursion method now too since I don't want to recalculate the acceleration in each recursion. 
public override void Update(double fixedTimeStep) { Acceleration.Zero();// = CalculateAcceleration(fixedTimeStep); PreviousState = new MovingEntityState(CurrentState.Position, CurrentState.Velocity); CurrentState = SlidingCollision(CurrentState, fixedTimeStep); Heading = Vector2.NormalizeRet(CurrentState.Velocity); } private MovingEntityState SlidingCollision(MovingEntityState state, double timeStep) { bool collisionFound = false; /* Calculate the next position given no detected collision. */ Vector2 futurePosition = state.Position + state.Velocity * timeStep; Vector2 intersectionPoint = new Vector2(); Vector2 intersectionPointNormal = new Vector2(); /* I did not include the collision detection code, if a collision is detected the intersection point and normal in that point is returned. */ /* If no collision was detected it is safe to move to the future position. */ if (!collisionFound) return new MovingEntityState(futurePosition, state.Velocity); /* Set new position to the intersection point (slightly before). */ Vector2 newPosition = intersectionPoint; /* Project the new velocity along the intersection normal. */ Vector2 newVelocity = state.Velocity - 1.90 * intersectionPointNormal * state.Velocity.DotProduct(intersectionPointNormal); /* Calculate the time of collision. */ double timeOfCollision = Math.Sqrt((newPosition - state.Position).LengthSq() / (futurePosition - state.Position).LengthSq()); /* Calculate new time step, remaining time of full step after the collision * current time step. */ double newTimeStep = timeStep * (1 - timeOfCollision); return SlidingCollision(new MovingEntityState(newPosition, newVelocity), newTimeStep); }
Even though the code above seems to slide the puck correctly, please have a look at it. I have a few questions. If I don't multiply by 1.90 in the newVelocity calculation it doesn't work (I get a stack overflow when the puck enters the corner because the timeStep decreases very slowly - a collision is found early in every recursion); why is that? What does 1.90 really do, and why 1.90? Also I have a new problem: the puck does not move parallel to the short side after exiting the curve; to be more exact it moves outside the rink (I am not checking for any collisions with the short side at the moment). When I perform the collision detection I first check that the puck is in the correct quadrant. For example the bottom-right corner is quadrant four, i.e. circleCenter.X < puck.X && circleCenter.Y < puck.Y. Is this a problem? Or should the short side of the rink be the one to make the puck go parallel to it, and not the last collision in the corner?
EDIT2: This is the code I use for collision detection, maybe it has something to do with the fact that I can't make the puck slide (-1.0) but only reflect (-2.0): /* Point is the current position (not the predicted one) and quadrant is 4 for the bottom-right corner for example. */ if (GeometryHelper.PointInCircleQuadrant(circleCenter, circleRadius, state.Position, quadrant)) { /* The line is: from = state.Position, to = futurePosition. So a collision is detected when from is inside the circle and to is outside. */ if (GeometryHelper.LineCircleIntersection2d(state.Position, futurePosition, circleCenter, circleRadius, intersectionPoint, quadrant)) { collisionFound = true; /* Set the intersection point to slightly before the real intersection point (I read somewhere this was good to do because of floating point precision, not sure exactly how much though). */ intersectionPoint = intersectionPoint - Vector2.NormalizeRet(state.Velocity) * 0.001; /* Normal at the intersection point. */ intersectionPointNormal = Vector2.NormalizeRet(circleCenter - intersectionPoint); } }
When I set the intersection point, if I for example use 0.1 instead of 0.001 the puck travels further before it gets stuck, but for all values I have tried (including 0 - the real intersection point) it gets stuck somewhere (though I do not necessarily get a stack overflow). Can something in this part be the cause of my problem? I can see why I get the stack overflow when using -1.0 when calculating the new velocity vector, but not how to solve it. I traced the time steps used in the recursion (the initial time step is always 1/60 ~ 0.01666):
Recursion depth - Time step for the next recursive call
[Start recursion, time step ~ 0.016666]
0 - 0,000985806527246773
[No collision, stop recursion]
[Start recursion, time step ~ 0.016666]
0 - 0,0149596704364629
1 - 0,0144883449376379
2 - 0,0143155612984837
3 - 0,014224925727213
4 - 0,0141673917461608
5 - 0,0141265435314026
6 - 0,0140953966184117
7 - 0,0140704653746625
...and so on. As you can see, the collision is detected early in every recursive call, which means the next time step decreases very slowly and thus the recursion depth gets very big - stack overflow.
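For reference (this is not from the original question, and the small Vec2 struct below is a hypothetical stand-in rather than the asker's Vector2 class), the two standard collision responses differ only in how much of the velocity's normal component is removed: subtracting it once gives a slide along the surface, subtracting it twice gives a reflection. A factor such as 1.90 sits between the two, which is why it behaves like a damped bounce rather than a true slide. A minimal sketch, assuming a unit-length surface normal:

    using System;

    struct Vec2
    {
        public double X, Y;
        public Vec2(double x, double y) { X = x; Y = y; }
        public static Vec2 operator -(Vec2 a, Vec2 b) { return new Vec2(a.X - b.X, a.Y - b.Y); }
        public static Vec2 operator *(double s, Vec2 a) { return new Vec2(s * a.X, s * a.Y); }
        public double Dot(Vec2 other) { return X * other.X + Y * other.Y; }
    }

    static class CollisionResponse
    {
        // Slide: remove the component of the velocity along the unit surface normal once.
        public static Vec2 Slide(Vec2 velocity, Vec2 unitNormal)
        {
            return velocity - velocity.Dot(unitNormal) * unitNormal;
        }

        // Reflect: remove the normal component twice, i.e. a perfect bounce.
        public static Vec2 Reflect(Vec2 velocity, Vec2 unitNormal)
        {
            return velocity - 2.0 * velocity.Dot(unitNormal) * unitNormal;
        }

        static void Main()
        {
            Vec2 v = new Vec2(3.0, -1.0);   // incoming velocity
            Vec2 n = new Vec2(0.0, 1.0);    // unit surface normal

            Vec2 slid = Slide(v, n);        // (3, 0): tangential motion only
            Vec2 bounced = Reflect(v, n);   // (3, 1): normal component inverted
            Console.WriteLine("slide: ({0}, {1})  reflect: ({2}, {3})", slid.X, slid.Y, bounced.X, bounced.Y);
        }
    }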


  • SQLCMD Mode: give it one more chance

    - by Maria Zakourdaev
      - Click on me. Choose me. - asked one forgotten feature when some bored DBA was purposelessly wandering through the Management Studio menu at the end of her long and busy working day.
- Why would I use you? I have heard of no one who does. What are you for? - perplexedly wondered the aged and wise DBA. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't.
- I know you. You are quite lazy. Why would you do additional clicks to move from window to window? From tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window!
- I have all my tools that I'm used to, I have Management Studio, Cmd, Powershell. They can do anything for me. I don't need additional tools.
- I promise you, you will like me. - the thing continued to whine.
- All right, show me. - she gave up. It's always this way, she thought sadly, - easier to agree than to explain why you don't want to.
- Enable me and then think about anything that you always couldn't do through the Management Studio and had to use other tools for.
- Ok. Google for me the list of greatest features of SQL SERVER 2012.
- Well... I'm not sure... Think about something else.
- Ok, here is something easy for you. I want to check if a file folder exists or if a file is there. Though, I can easily do this using xp_cmdshell...
- This is easy for me. - rejoiced the feature.
By the way, having the items of the menu talking to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn't thinking about the weirdness of the situation at that moment.
- After enabling me, - said the unfairly forgotten feature (it was thinking of itself in such a manner) - after enabling me you can use all command line commands in the same Management Studio query window by adding two exclamation marks !! at the beginning of the script line to denote that you want to use a cmd command:
- Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer and not on the SQL Server that the query window is connected to. This is the main difference from using xp_cmdshell, which executes commands on the SQL Server itself. Bottom line: use a UNC path instead of a local path.
- Look, there is much more than that. - The SQLCMD feature was getting excited. - You can get the IP of your servers, create, rename and drop folders. You can see the contents of any file anywhere and even start different tools from the same query window:
The not so aged and wise DBA was getting interested:
- I also want to run different scripts on different servers without changing the connection of the query window.
- Sure, sure! Another great feature that SQLCMD mode provides us with, giving more power to querying. Use ":" for additional features, like :connect, which allows you to change the connection:
- Now imagine, you have one script where you have all your changes, like creating a staging table on the DWH staging server, adding a fact table to the DWH itself and updating stored procedures on the server where the reporting database is located.
- Now, give me more challenges!
- Script out a list of stored procedures into text files.
- You can do that easily by using the :out command, which will write the query results into the specified text file. The output can be the code of the stored procedure or any data. Actually this is the same as changing the query output to a file instead of the grid.
- Now, take all of the scripts and run all of them, one by one, on a different server.
- Easily.
- Come on... I'm sure that you cannot...
- Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define an environment variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it's not working, although I have no idea why.
- Wow. - She was really impressed. - Ok, I'll go try all those...
- Wait, wait! I know how to google the SQL SERVER features for you! This example will open the Chrome browser with search results for "SQL Server 2012 top features" (change the path to suit your PC):
"Well, this can probably be useful stuff, maybe this feature is really unfairly forgotten", thought the DBA while going through the dark empty parking lot to her lonely car. "As someone really wise once said: 'It is what we think we know that keeps us from learning. Learn, unlearn and relearn'."


  • Pluggable Rules for Entity Framework Code First

    - by Ricardo Peres
    Suppose you want a system that lets you plug custom validation rules on your Entity Framework context. The rules would control whether an entity can be saved, updated or deleted, and would be implemented in plain .NET. Yes, I know I already talked about plugable validation in Entity Framework Code First, but this is a different approach. An example API is in order, first, a ruleset, which will hold the collection of rules: 1: public interface IRuleset : IDisposable 2: { 3: void AddRule<T>(IRule<T> rule); 4: IEnumerable<IRule<T>> GetRules<T>(); 5: } Next, a rule: 1: public interface IRule<T> 2: { 3: Boolean CanSave(T entity, DbContext ctx); 4: Boolean CanUpdate(T entity, DbContext ctx); 5: Boolean CanDelete(T entity, DbContext ctx); 6: String Name 7: { 8: get; 9: } 10: } Let’s analyze what we have, starting with the ruleset: Only has methods for adding a rule, specific to an entity type, and to list all rules of this entity type; By implementing IDisposable, we allow it to be cancelled, by disposing of it when we no longer want its rules to be applied. A rule, on the other hand: Has discrete methods for checking if a given entity can be saved, updated or deleted, which receive as parameters the entity itself and a pointer to the DbContext to which the ruleset was applied; Has a name property for helping us identifying what failed. A ruleset really doesn’t need a public implementation, all we need is its interface. The private (internal) implementation might look like this: 1: sealed class Ruleset : IRuleset 2: { 3: private readonly IDictionary<Type, HashSet<Object>> rules = new Dictionary<Type, HashSet<Object>>(); 4: private ObjectContext octx = null; 5:  6: internal Ruleset(ObjectContext octx) 7: { 8: this.octx = octx; 9: } 10:  11: public void AddRule<T>(IRule<T> rule) 12: { 13: if (this.rules.ContainsKey(typeof(T)) == false) 14: { 15: this.rules[typeof(T)] = new HashSet<Object>(); 16: } 17:  18: this.rules[typeof(T)].Add(rule); 19: } 20:  21: public IEnumerable<IRule<T>> GetRules<T>() 22: { 23: if (this.rules.ContainsKey(typeof(T)) == true) 24: { 25: foreach (IRule<T> rule in this.rules[typeof(T)]) 26: { 27: yield return (rule); 28: } 29: } 30: } 31:  32: public void Dispose() 33: { 34: this.octx.SavingChanges -= RulesExtensions.OnSaving; 35: RulesExtensions.rulesets.Remove(this.octx); 36: this.octx = null; 37:  38: this.rules.Clear(); 39: } 40: } Basically, this implementation: Stores the ObjectContext of the DbContext to which it was created for, this is so that later we can remove the association; Has a collection - a set, actually, which does not allow duplication - of rules indexed by the real Type of an entity (because of proxying, an entity may be of a type that inherits from the class that we declared); Has generic methods for adding and enumerating rules of a given type; Has a Dispose method for cancelling the enforcement of the rules. 
A (really dumb) rule applied to Product might look like this: 1: class ProductRule : IRule<Product> 2: { 3: #region IRule<Product> Members 4:  5: public String Name 6: { 7: get 8: { 9: return ("Rule 1"); 10: } 11: } 12:  13: public Boolean CanSave(Product entity, DbContext ctx) 14: { 15: return (entity.Price > 10000); 16: } 17:  18: public Boolean CanUpdate(Product entity, DbContext ctx) 19: { 20: return (true); 21: } 22:  23: public Boolean CanDelete(Product entity, DbContext ctx) 24: { 25: return (true); 26: } 27:  28: #endregion 29: } The DbContext is there because we may need to check something else in the database before deciding whether to allow an operation or not. And here’s how to apply this mechanism to any DbContext, without requiring the usage of a subclass, by means of an extension method: 1: public static class RulesExtensions 2: { 3: private static readonly MethodInfo getRulesMethod = typeof(IRuleset).GetMethod("GetRules"); 4: internal static readonly IDictionary<ObjectContext, Tuple<IRuleset, DbContext>> rulesets = new Dictionary<ObjectContext, Tuple<IRuleset, DbContext>>(); 5:  6: private static Type GetRealType(Object entity) 7: { 8: return (entity.GetType().Assembly.IsDynamic == true ? entity.GetType().BaseType : entity.GetType()); 9: } 10:  11: internal static void OnSaving(Object sender, EventArgs e) 12: { 13: ObjectContext octx = sender as ObjectContext; 14: IRuleset ruleset = rulesets[octx].Item1; 15: DbContext ctx = rulesets[octx].Item2; 16:  17: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Added)) 18: { 19: Object entity = entry.Entity; 20: Type realType = GetRealType(entity); 21:  22: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 23: { 24: if (rule.CanSave(entity, ctx) == false) 25: { 26: throw (new Exception(String.Format("Cannot save entity {0} due to rule {1}", entity, rule.Name))); 27: } 28: } 29: } 30:  31: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted)) 32: { 33: Object entity = entry.Entity; 34: Type realType = GetRealType(entity); 35:  36: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 37: { 38: if (rule.CanDelete(entity, ctx) == false) 39: { 40: throw (new Exception(String.Format("Cannot delete entity {0} due to rule {1}", entity, rule.Name))); 41: } 42: } 43: } 44:  45: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Modified)) 46: { 47: Object entity = entry.Entity; 48: Type realType = GetRealType(entity); 49:  50: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 51: { 52: if (rule.CanUpdate(entity, ctx) == false) 53: { 54: throw (new Exception(String.Format("Cannot update entity {0} due to rule {1}", entity, rule.Name))); 55: } 56: } 57: } 58: } 59:  60: public static IRuleset CreateRuleset(this DbContext context) 61: { 62: Tuple<IRuleset, DbContext> ruleset = null; 63: ObjectContext octx = (context as IObjectContextAdapter).ObjectContext; 64:  65: if (rulesets.TryGetValue(octx, out ruleset) == false) 66: { 67: ruleset = rulesets[octx] = new Tuple<IRuleset, DbContext>(new Ruleset(octx), context); 68: 69: octx.SavingChanges += OnSaving; 70: } 71:  72: return (ruleset.Item1); 73: } 74: } It relies on the SavingChanges event of the ObjectContext to intercept the saving operations before they are actually issued. 
Yes, it uses a bit of dynamic magic! Very handy, by the way! So, let’s put it all together: 1: using (MyContext ctx = new MyContext()) 2: { 3: IRuleset rules = ctx.CreateRuleset(); 4: rules.AddRule(new ProductRule()); 5:  6: ctx.Products.Add(new Product() { Name = "xyz", Price = 50000 }); 7:  8: ctx.SaveChanges(); //an exception is fired here 9:  10: //when we no longer need to apply the rules 11: rules.Dispose(); 12: } Feel free to use it and extend it any way you like, and do give me your feedback! As a final note, this can be easily changed to support plain old Entity Framework (not Code First, that is), if that is what you are using.
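As a possible extension (this is my own sketch, not part of Ricardo's post), the DbContext parameter makes it easy for a rule to consult the database before allowing an operation. For example, a hypothetical rule that rejects saving a Product whose Name is already taken might look like the following; it assumes the same Product entity used above and requires a using directive for System.Linq:

    // Hypothetical rule, for illustration only: it queries the context before allowing a save.
    class UniqueProductNameRule : IRule<Product>
    {
        public String Name
        {
            get { return ("Unique product name"); }
        }

        public Boolean CanSave(Product entity, DbContext ctx)
        {
            // The entity being added is not in the database yet, so any match is a duplicate.
            return (ctx.Set<Product>().Any(p => p.Name == entity.Name) == false);
        }

        public Boolean CanUpdate(Product entity, DbContext ctx)
        {
            return (true);
        }

        public Boolean CanDelete(Product entity, DbContext ctx)
        {
            return (true);
        }
    }

It would be registered exactly like the other rule, with rules.AddRule(new UniqueProductNameRule()).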


  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda was a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system finally is headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC8 in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010 published on Monday 17 May. It is ok to single out the ICT sector. It simply is the most important sector right now as it fuels growth in all other sectors. Let's not wait for the entire standardization package which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting", by 2011 in the Horisontal Guidelines now out for public consultation by DG COMP and to some extent by DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I meant in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased". 
The missed opportunity (in this case not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos ensued in the European Wild West (and I mean that in the most endearing sense, and my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that--to the EU-- excludes most standards used by the IT industry world wide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice" and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth: A strong European software strategy centred around open standards based interoperability by 2011. An ambitious new eCommission strategy for 2011-15 focused on migration to open standards by 2015. Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012. Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running) Prioritizing public sector needs in global standardization over European standardization by 2014.


  • How to build a Singleton-like dependency injector replacement (Php)

    - by Erparom
    I know out there are a lot of excelent containers, even frameworks almost entirely DI based with good strong IoC classes. However, this doesn't help me to "define" a new pattern. (This is Php code but understandable to anyone) Supose we have: //Declares the singleton class bookSingleton { private $author; private static $bookInstance; private static $isLoaned = FALSE; //The private constructor private function __constructor() { $this->author = "Onecrappy Writer Ofcheap Novels"; } //Sets the global isLoaned state and also gets self instance public static function loanBook() { if (self::$isLoaned === FALSE) { //Book already taken, so return false return FALSE; } else { //Ok, not loaned, lets instantiate (if needed and loan) if (!isset(self::$bookInstance)) { self::$bookInstance = new BookSingleton(); } self::$isLoaned = TRUE; } } //Return loaned state to false, so another book reader can take the book public function returnBook() { $self::$isLoaned = FALSE; } public function getAuthor() { return $this->author; } } Then we get the singelton consumtion class: //Consumes the Singleton class BookBorrower() { private $borrowedBook; private $haveBookState; public function __construct() { this->haveBookState = FALSE; } //Use the singelton-pattern behavior public function borrowBook() { $this->borrowedBook = BookSingleton::loanBook(); //Check if was successfully borrowed if (!this->borrowedBook) { $this->haveBookState = FALSE; } else { $this->haveBookState = TRUE; } } public function returnBook() { $this->borrowedBook->returnBook(); $this->haveBookState = FALSE; } public function getBook() { if ($this->haveBookState) { return "The book is loaned, the author is" . $this->borrowedbook->getAuthor(); } else { return "I don't have the book, perhaps someone else took it"; } } } At last, we got a client, to test the behavior function __autoload($class) { require_once $class . '.php'; } function write ($whatever,$breaks) { for($break = 0;$break<$breaks;$break++) { $whatever .= "\n"; } echo nl2br($whatever); } write("Begin Singleton test", 2); $borrowerJuan = new BookBorrower(); $borrowerPedro = new BookBorrower(); write("Juan asks for the book", 1); $borrowerJuan->borrowBook(); write("Book Borrowed? ", 1); write($borrowerJuan->getAuthorAndTitle(),2); write("Pedro asks for the book", 1); $borrowerPedro->borrowBook(); write("Book Borrowed? ", 1); write($borrowerPedro->getAuthorAndTitle(),2); write("Juan returns the book", 1); $borrowerJuan->returnBook(); write("Returned Book Juan? ", 1); write($borrowerJuan->getAuthorAndTitle(),2); write("Pedro asks again for the book", 1); $borrowerPedro->borrowBook(); write("Book Borrowed? ", 1); write($borrowerPedro->getAuthorAndTitle(),2); This will end up in the expected behavior: Begin Singleton test Juan asks for the book Book Borrowed? The book is loaned, the author is = Onecrappy Writer Ofcheap Novels Pedro asks for the book Book Borrowed? I don't have the book, perhaps someone else took it Juan returns the book Returned Book Juan? I don't have the book, perhaps someone else took it Pedro asks again for the book Book Borrowed? The book is loaned, the author is = Onecrappy Writer Ofcheap Novels So I want to make a pattern based on the DI technique able to do exactly the same, but without singleton pattern. 
As far as I'm aware, I KNOW I must inject the book inside "borrowBook" function instead of taking a static instance: public function borrowBook(BookNonSingleton $book) { if (isset($this->borrowedBook) || $book->isLoaned()) { $this->haveBook = FALSE; return FALSE; } else { $this->borrowedBook = $book; $this->haveBook = TRUE; return TRUE; } } And at the client, just handle the book: $borrowerJuan = new BookBorrower(); $borrowerJuan-borrowBook(new NonSingletonBook()); Etc... and so far so good, BUT... Im taking the responsability of "single instance" to the borrower, instead of keeping that responsability inside the NonSingletonBook, that since it has not anymore a private constructor, can be instantiated as many times... making instances on each call. So, What does my NonSingletonBook class MUST be in order to never allow borrowers to have this same book twice? (aka) keep the single instance. Because the dependency injector part of the code (borrower) does not solve me this AT ALL. Is it needed the container with an "asShared" method builder with static behavior? No way to encapsulate this functionallity into the Book itself? "Hey Im a book and I shouldn't be instantiated more than once, I'm unique"


  • Blog Buzz - Devoxx 2011

    - by Janice J. Heiss
    Some day I will make it to Devoxx – for now, I’m content to vicariously follow the blogs of attendees and pick up on what’s happening.  I’ve been doing more blog "fishing," looking for the best commentary on 2011 Devoxx. There’s plenty of food for thought – and the ideas are not half-baked.The bloggers are out in full, offering useful summaries and commentary on Devoxx goings-on.Constantin Partac, a Java developer and a member of Transylvania JUG, a community from Cluj-Napoca/Romania, offers an excellent summary of the Devoxx keynotes. Here’s a sample:“Oracle Opening Keynote and JDK 7, 8, and 9 Presentation•    Oracle is committed to Java and wants to provide support for it on any device.•    JSE 7 for Mac will be released next week.•    Oracle would like Java developers to be involved in JCP, to adopt a JSR and to attend local JUG meetings.•    JEE 7 will be released next year.•    JEE 7 is focused on cloud integration, some of the features are already implemented in glassfish 4 development branch.•    JSE 8 will be release in summer of 2013 due to “enterprise community request” as they can not keep the pace with an 18    month release cycle.•    The main features included in JSE8 are lambda support, project Jigsaw, new Date/Time API, project Coin++ and adding   support for sensors. JSE 9 probably will focus on some of these features:1.    self tuning JVM2.    improved native language integration3.    processing enhancement for big data4.    reification (adding runtime class type info for generic types)5.    unification of primitive and corresponding object classes6.    meta-object protocol in order to use type and methods define in other JVM languages7.    multi-tenancy8.    JVM resource management” Thanks Constantin! Ivan St. Ivanov, of SAP Labs Bulgaria, also commented on the keynotes with a different focus.  He summarizes Henrik Stahl’s look ahead to Java SE 8 and JavaFX 3.0; Cameron Purdy on Java EE and the cloud; celebrated Java Champion Josh Bloch on what’s good and bad about Java; Mark Reinhold’s quick look ahead to Java SE 9; and Brian Goetz on lambdas and default methods in Java SE 8. Here’s St. Ivanov’s account of Josh Bloch’s comments on the pluses of Java:“He started with the virtues of the platform. To name a few:    Tightly specified language primitives and evaluation order – int is always 32 bits and operations are executed always from left  to right, without compilers messing around    Dynamic linking – when you change a class, you need to recompile and rebuild just the jar that has it and not the whole application    Syntax  similarity with C/C++ – most existing developers at that time felt like at home    Object orientations – it was cool at that time as well as functional programming is today    It was statically typed language – helps in faster runtime, better IDE support, etc.    No operator overloading – well, I’m not sure why it is good. Scala has it for example and that’s why it is far better for defining DSLs. But I will not argue with Josh.”It’s worth checking out St. Ivanov’s summary of Bloch’s views on what’s not so great about Java as well. What's Coming in JAX-RS 2.0Marek Potociar, Principal Software Engineer at Oracle and currently specification lead of Java EE RESTful web services API (JAX-RS), blogged on his talk about what's coming in JAX-RS 2.0, scheduled for final release in mid-2012.  Here’s a taste:“Perhaps the most wanted addition to the JAX-RS is the Client API, that would complete the JAX-RS story, that is currently server-side only. 
In JAX-RS 2.0 we are adding a completely interface-based and fluent client API that blends nicely in with the existing fluent response builder pattern on the server-side. When we started with the client API, the first proposal contained around 30 classes. Thanks to the feedback from our Expert Group we managed to reduce the number of API classes to 14 (2 of them being exceptions)! The resulting is compact while at the same time we still managed to create an API that reflects the method invocation context flow (e.g. once you decide on the target URI and start setting headers on the request, your IDE will not try to offer you a URI setter in the code completion). This is a subtle but very important usability aspect of an API…” Obviously, Devoxx is a great Java conference, one that is hitting this year at a time when much is brewing in the platform and beginning to be anticipated.


  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
    Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle® PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes.
Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal
Keste, Oracle Platinum Partner
Customer Overview
Omnitracs, Inc., a Qualcomm company, provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs' mobility services include basic communications such as text as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weight distribution, and many others.
Customer Business Needs
Already the leading provider of mobility solutions for large trucking fleets, they chose to target smaller trucking fleets as new customers. However, their existing high-touch customer support method would not be a cost effective or scalable way to manage and service these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation, and smaller customers need to track and schedule the installation. Information captured in Oracle eBusiness Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high impact user interface to significantly improve customer experience, with the ability to integrate with EBS, provisioning systems as well as CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other portals. Omnitracs' stated goal was to deliver an "eBay-like" or "Amazon-like" experience for all of their customers so that they could reach a much broader market beyond their large company customer base.
Solution Overview
In order to manage the increased complexity and the growing support needs of global customers and improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self-service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of the (internal and external) customers. This solution was a customer service Portal that provided self-service capabilities for large and small customers alike: activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies, as well as deactivation, all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts.
This is a conceptual view of the Customer Portal showing the details of the components that make up the solution.
The portal application for transactions was entirely built using ADF 11g R2. Omnitracs' business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back-end, any downtime on the EBS would negate this availability. Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any eBusiness Suite downtime. The customer has no knowledge of whether EBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF.
Customer Benefits
The Customer Portal not only provided the scalability to grow the business but also provided seamless integration with other disparate applications. Some of the key benefits are:
- Improved customer experience: With a modern look and feel and a Portal that has the aspects of an App Store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages.
- Enabled new product launches: After successfully dominating the large fleet market, Omnitracs now has a scalable solution to sell and manage smaller fleet customers, giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes.
- Seamless integrations improve customer support: ADF 11gR2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers, routing them from a marketing-focused application to a customer-oriented portal. Internally, it also allowed Sales Representatives to have an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included:
  - Unity Core
  - Salesforce.com
  - Merchant e-Solution for credit card
  - Custom Omnitracs applications like CUPS and AUTO
  - Security utilizing OID and OVD
  - Back end integration with EBS (Data Guard) and iQ Database
Business Impact
Significant business impacts were realized through the launch of the customer portal. It not only allows the business to push through in underserved segments, but also reduces the time it needs to spend on customer support, allowing the business to focus more on sales and identifying the market for new products. Some of the immediate benefits are:
- The entire onboarding process is now completely automated and completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster!
- With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services.
- 90%+ improvement of the customer onboarding and management process by utilizing single sign-on integration using the OID/OAM solution, performance improvements and new self-service functionality.
- Unified login for all Customers, Partners and Internal Users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience.
- Significantly improved customer experience with a better look and feel and more user-experience-focused Portal screens.
- Helped sales of the new product by having an easy way of ordering and activating the product.
- Data Guard helped increase availability of the Portal to 99%+ and make it independent of EBS downtime. This gave customers the feel of high availability of the portal application.
Some of the anticipated longer term benefits are:
- A platform that can be leveraged to launch any new product introduction and enable all product teams to reach new customers and new markets
- Easy integration with content management to allow business owners more control of the product catalog
- Overall reduced TCO with standardization on the Oracle platform
- Managed IT support cost savings through optimization of the technology skills needed to support and modify this solution


  • Personal Technology – Laptop Screen Blank – No Post – No BIOS – No Boot

    - by Pinal Dave
    If your laptop Screen is Blank and there is no POST, BIOS or boot, you can follow the steps mentioned here and there are chances that it will work if there is no hardware failure inside. Step 1: Remove the power cord from the laptop Step 2: Remove the battery from the laptop Step 3: Hold power button (keep it pressed) for almost 60 seconds Step 4: Plug power back in laptop Step 5: Start computer and it should just start normally. Step 6: Now shut down Step 7: Insert the battery back in the laptop Step 8: Start laptop again and it should work Note 1: If your laptop does not work after inserting back the memory. Remove the memory and repeat above process. Do not insert the battery back as it is malfunctioning. Note 2: If your screen is faulty or have issues with your hardware (motherboard, screen or anything else) this method will not fix your computer. Those, who care about how I come up with this not SQL related blog post, here is the very funny true story. If you are a married man, you will know what I am going to describe next. May be you have faced the same situation or at least you feel and understand my situation. My wife’s computer suddenly stops working when she was searching for my daughter’s mathematics worksheets online. While the fatal accident happened with my wife’s computer (which was my loyal computer for over 4 years before she got it), I was working in my home office, fixing a high priority issue (live order’s database was corrupted) with one of the largest eCommerce websites.  While I was working on production server where I was fixing database corruption, my wife ran to my home office. Here is how the conversation went: Wife: This computer does not work. I: Restart it. Wife: It does not start. I: What did you do with it? Wife: Nothing, it just stopped working. I: Okey, I will look into it later, working on the very urgent issue. Wife: I was printing my daughter’s worksheet. I: Hm.. Okey. Wife: It was the mathematics worksheet, which you promised you will teach but you never get around to do it, so I am doing it myself. I: Thanks. I appreciate it. I am very busy with this issue as million dollar transaction are not happening as the database got corrupted and … Wife: So what … umm… You mean to say that you care about this customer more than your daughter. You know she got A+ in every other class but in mathematics she got only A. She missed that extra credit question. I: She is only 4, it is okay. Wife: She is 4.5 years old not 4. So you are not going to fix this computer which does not start at all. I think our daughter next time will even get lower grades as her dad is busy fixing something. I: Alright, I give up bring me that computer. Our daughter who was listening everything so far she finally decided to speak up. Daughter: Dad, it is a laptop not computer. I: Yes, sweety get that laptop here and your dad is going to fix the this small issue of million dollar issue later on. I decided to pay attention to my wife’s computer. She was right. No matter what I do, it will not boot up, it will not start, no BIOS, no POST screen. The computer starts for a second but nothing comes up on the screen. The light indicating hard drive comes up for a second and goes off. Nothing happens. I removed every single USB drive from the laptop but it still would not start. It was indeed no fun for me. Finally I remember my days when I was not married and used to study in University of Southern California, Los Angeles. 
    I remembered that I used to have a very old second (or maybe third or fourth) hand computer with me. In polite words, I had a pre-owned computer, and it used to face very similar issues again and again. I had a small routine I used to follow to fix my old computer, and I decided to follow the same steps again with this computer (the eight steps and two notes listed at the top of this post). Once I followed the above process, her computer worked. I was very delighted, as I could now go back to solving the problem where millions of transactions were waiting while I fixed the corrupted database, which was in emergency mode. Once I fixed the computer, I looked at my wife and asked. I: Well, now that this laptop is back online, can I get a guarantee that she will get an A+ in mathematics in this week's quiz? Wife: Sure, I promise. I: Fantastic. After saying that I started to look at my database corruption and my wife interrupted me again. Wife: By the way, I forgot to tell you. Our daughter got an A in mathematics last week, but she had another quiz today and she has already received an A+ there. I kept my promise. I looked at her and she started to walk out of the room; before I could say anything my phone rang. The DBA from the eCommerce company had called me, wondering why there was no activity from my side in the last 10 minutes. DBA: Hey bud, are you still connected? I see um… no activity in the last 10 minutes. I: Oh, well, I was just saving the world. I am back now. After two hours I had fixed the database corruption and everything was normal. I was outsmarted by my wife, but honestly I still respect and love her the same, as she is the one who spends countless hours with our daughter so that she does not miss me and I can continue writing blogs and doing technology evangelism. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Humor, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Unable to boot Ubuntu 13.10 (nVidia GTX 770m and Intel HD 4600)

    - by Raziel Gonzalez
    Ever since I bought this laptop I've been trying to install Ubuntu on it. It came with W8 preinstalled. Up to this point, I've been able to boot in UEFI mode with a black screen. I can tell it's trying to use the nVidia card (there's a led on the computer, depending on the color you can tell which GPU is using) and if I press crtl+alt+F1 I can go to console mode. Taking this advantage I tried to install bumblebee and after a successful install the led that indicates which GPU is being used change, indicating that it switched to the Intel HD 4600 graphics. After the installation I tried to initiate the graphic interface (startx) with no success. Xorg.0.log shows the error: [ 3706.779] X.Org X Server 1.14.3 Release Date: 2013-09-12 [ 3706.782] X Protocol Version 11, Revision 0 [ 3706.783] Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu [ 3706.783] Current Operating System: Linux ubuntu 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 x86_64 [ 3706.783] Kernel command line: BOOT_IMAGE=/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper nomodeset -- [ 3706.785] Build Date: 15 October 2013 09:23:37AM [ 3706.786] xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support) [ 3706.786] Current version of pixman: 0.30.2 [ 3706.788] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 3706.788] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 3706.791] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 2 12:28:52 2013 [ 3706.792] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 3706.792] (==) No Layout section. Using the first Screen section. [ 3706.792] (==) No screen section available. Using defaults. [ 3706.792] (**) |-->Screen "Default Screen Section" (0) [ 3706.792] (**) | |-->Monitor "<default monitor>" [ 3706.792] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 3706.792] (==) Automatically adding devices [ 3706.792] (==) Automatically enabling devices [ 3706.792] (==) Automatically adding GPU devices [ 3706.792] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 3706.792] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 3706.792] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 3706.792] (II) Loader magic: 0x7ff680918d20 [ 3706.792] (II) Module ABI versions: [ 3706.792] X.Org ANSI C Emulation: 0.4 [ 3706.792] X.Org Video Driver: 14.1 [ 3706.792] X.Org XInput driver : 19.1 [ 3706.792] X.Org Server Extension : 7.0 [ 3706.793] (--) PCI:*(0:0:2:0) 8086:0416:1462:10e8 rev 6, Mem @ 0xf7400000/4194304, 0xb0000000/268435456, I/O @ 0x0000f000/64 [ 3706.793] (II) Open ACPI successful (/var/run/acpid.socket) [ 3706.794] Initializing built-in extension Generic Event Extension [ 3706.795] Initializing built-in extension SHAPE [ 3706.796] Initializing built-in extension MIT-SHM [ 3706.797] Initializing built-in extension XInputExtension [ 3706.797] Initializing built-in extension XTEST [ 3706.798] Initializing built-in extension BIG-REQUESTS [ 3706.799] Initializing built-in extension SYNC [ 3706.799] Initializing built-in extension XKEYBOARD [ 3706.800] Initializing built-in extension XC-MISC [ 3706.801] Initializing built-in extension SECURITY [ 3706.802] Initializing built-in extension XINERAMA [ 3706.802] Initializing built-in extension XFIXES [ 3706.803] Initializing built-in extension RENDER [ 3706.804] Initializing built-in extension RANDR [ 3706.804] Initializing built-in extension COMPOSITE [ 3706.805] Initializing built-in extension DAMAGE [ 3706.806] Initializing built-in extension MIT-SCREEN-SAVER [ 3706.806] Initializing built-in extension DOUBLE-BUFFER [ 3706.807] Initializing built-in extension RECORD [ 3706.807] Initializing built-in extension DPMS [ 3706.808] Initializing built-in extension X-Resource [ 3706.809] Initializing built-in extension XVideo [ 3706.809] Initializing built-in extension XVideo-MotionCompensation [ 3706.810] Initializing built-in extension SELinux [ 3706.811] Initializing built-in extension XFree86-VidModeExtension [ 3706.811] Initializing built-in extension XFree86-DGA [ 3706.812] Initializing built-in extension XFree86-DRI [ 3706.812] Initializing built-in extension DRI2 [ 3706.812] (II) "glx" will be loaded by default. [ 3706.812] (WW) "xmir" is not to be loaded by default. Skipping. 
[ 3706.812] (II) LoadModule: "dri2" [ 3706.812] (II) Module "dri2" already built-in [ 3706.812] (II) LoadModule: "glamoregl" [ 3706.813] (II) Loading /usr/lib/xorg/modules/libglamoregl.so [ 3706.813] (II) Module glamoregl: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.2.901, module version = 0.5.1 [ 3706.813] ABI class: X.Org ANSI C Emulation, version 0.4 [ 3706.813] (II) LoadModule: "glx" [ 3706.813] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 3706.813] (II) Module glx: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.3, module version = 1.0.0 [ 3706.813] ABI class: X.Org Server Extension, version 7.0 [ 3706.813] (==) AIGLX enabled [ 3706.814] Loading extension GLX [ 3706.814] (==) Matched intel as autoconfigured driver 0 [ 3706.814] (==) Matched vesa as autoconfigured driver 1 [ 3706.814] (==) Matched modesetting as autoconfigured driver 2 [ 3706.814] (==) Matched fbdev as autoconfigured driver 3 [ 3706.814] (==) Assigned the driver to the xf86ConfigLayout [ 3706.814] (II) LoadModule: "intel" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 3706.814] (II) Module intel: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.3, module version = 2.99.904 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "vesa" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 3706.814] (II) Module vesa: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 2.3.2 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "modesetting" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 3706.814] (II) Module modesetting: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 0.8.0 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "fbdev" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 3706.815] (II) Module fbdev: vendor="X.Org Foundation" [ 3706.815] compiled for 1.14.1, module version = 0.4.3 [ 3706.815] Module class: X.Org Video Driver [ 3706.815] ABI class: X.Org Video Driver, version 14.1 [ 3706.815] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, HD Graphics, HD Graphics 2000, HD Graphics 3000, HD Graphics 2500, HD Graphics 4000, HD Graphics P4000, HD Graphics 4600, HD Graphics 5000, HD Graphics P4600/P4700, Iris(TM) Graphics 5100, HD Graphics 4400, HD Graphics 4200, Iris(TM) Pro Graphics 5200 [ 3706.815] (II) VESA: driver for VESA chipsets: vesa [ 3706.815] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 3706.815] (II) FBDEV: driver for framebuffer: fbdev [ 3706.815] (--) using VT number 7 [ 3706.819] (WW) Falling back to old probe method for modesetting [ 3706.819] (EE) open /dev/dri/card0: No such file or directory [ 3706.819] (WW) Falling back to old probe method for fbdev [ 3706.819] (II) Loading sub module "fbdevhw" [ 3706.819] (II) LoadModule: "fbdevhw" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 3706.819] (II) Module fbdevhw: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 0.0.2 [ 3706.819] ABI 
class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "vbe" [ 3706.819] (II) LoadModule: "vbe" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libvbe.so [ 3706.819] (II) Module vbe: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.1.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "int10" [ 3706.819] (II) LoadModule: "int10" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libint10.so [ 3706.819] (II) Module int10: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.0.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) VESA(0): initializing int10 [ 3706.820] (EE) VESA(0): V_BIOS address 0x0 out of range [ 3706.820] (II) UnloadModule: "vesa" [ 3706.820] (II) UnloadSubModule: "int10" [ 3706.820] (II) Unloading int10 [ 3706.820] (II) UnloadSubModule: "vbe" [ 3706.820] (II) Unloading vbe [ 3706.820] (EE) Screen(s) found, but none have a usable configuration. [ 3706.820] (EE) Fatal server error: [ 3706.820] (EE) no screens found(EE) [ 3706.820] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 3706.820] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information. [ 3706.820] (EE) [ 3706.827] (EE) Server terminated with error (1). Closing log file. I also saved the dsmeg output to see if it can be of any help. In order to be able to get to this stage I had to boot with nomodeset option and removed quiet and splash. Anyone got this same error? Any guidance? I've tried other linux distros and so far the only one that is able to boot is Opensuse 12.3 without any issues (but only when I switch to legacy mode instead of UEFI).

    Read the article

  • Defining Your Online Segmentation and Targeting Strategy

    - by Christie Flanagan
    A lot of times, companies will put online segmentation and targeting on the back burner because they don't know where to start. Often, I've heard web managers say that their segments aren't well understood yet, so they can't really deliver personalized online experiences that are meaningful. This lack of complete understanding means that they don't really bother to try. But I don't think you necessarily need to have an elaborate segmentation and targeting strategy already in place to start delivering a more relevant online customer experience. Sometimes it helps to think of how segmentation and targeting might solve some of the challenges your site's visitors are currently experiencing on your web presence, rather than doing nothing and waiting until a fully baked segmentation strategy lands in your inbox. For example, perhaps you have a broad and varied service offering that makes it difficult for site visitors to easily find the solutions that are most relevant for them. How can segmentation and targeting help solve this problem? Or maybe it's like the airline I described in Monday's post, where the special deals featured on the home page are only relevant to site visitors from a couple of cities. Couldn't segmentation and targeting help them to highlight offers on their home page that are relevant to a larger share of their site visitors? Your early segmentation and targeting efforts do not need to be complicated. There are simple ways to start delivering a more relevant online customer experience, even if you're dealing with anonymous site visitors. These include targeting content to site visitors based on: Referral: Deliver targeted content to your site visitors that is based on where they came from or the search term they used to find your site. Behavior: Deliver content to your site visitors that is related or similar to content they've clicked on already. Location: Deliver content to your site visitors that is most relevant for their geographic location (this would solve that pesky airline home page problem described above). So as you can see, there really are some very simple ways in which you can start improving your online customer experience using very basic segmentation and targeting methods. One thing to keep in mind as you start to define your segmentation and targeting strategy is that there are many different types of attributes, or combinations of attributes, upon which you can base it. In addition to referral, behavior and location, other attributes that you should consider are: Profile Information: What profile information do you know about this customer already? Perhaps they provided some information on their interests and preferences when they first registered with your site.
Time:  What time is it and how does that impact what my site visitors are looking for or trying to do? Demographics: What are my site visitors’ ages, incomes or ethnicities? Which attributes you select to include in your segmentation strategy will depend on your unique business needs and objectives.  Attributes such as behavior or referral may not be the most important targeting criteria depending on your situation. For example, if you’re a newspaper you might know that certain visitors are sports fans based on their profile information.  You can create a segment for sports fans and target sports related content to that segment of your readership online.  Or perhaps, a reader is browsing stories that are related to politics; you can use that visitor’s behavior to assign him or her to a segment for those interested in politics. From there you can recommend more stories to that visitor based on their interest in politics. For an airline, the visitor’s location may be a more important attribute. By detecting the visitor’s location, you can assign them to an appropriate segment and then target special flights and offers to them based on their likely departure airport. As you can see, there are many practical ways that you can start improving the experience your customers receive on your web presence using fairly basic segmentation and targeting techniques. If you want to learn more about segmentation and targeting using Oracle’s web experience management solution, check out this helpful video that demonstrates these powerful capabilities in Oracle WebCenter Sites. ***** On Demand Webcast Featuring Brian Solis of Altimeter Group Trends such as the mobile web, social media, gamification and real-time are changing customer behavior and expectations. In this new environment, many businesses will struggle. Some will fall by the wayside, while others learn to adapt and thrive. Watch this on demand webcast with Altimeter Group digital analyst and author, Brian Solis, and discover what your organization needs to know about how to compete in the new era of Digital Darwinism. View now.
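    To make the attribute-based targeting above a bit more concrete, here is a small, purely hypothetical sketch (it is not Oracle WebCenter Sites API code; every class, rule and segment name is invented) showing how an anonymous visitor's referral, behavior and location attributes might map to segments.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical illustration of rule-based segment assignment for an anonymous visitor.
    public class SegmentAssigner {

        // Minimal stand-in for what might be known about an anonymous visitor.
        record Visitor(String referralTerm, String lastClickedCategory, String city) {}

        static List<String> assignSegments(Visitor v) {
            List<String> segments = new ArrayList<>();
            // Referral: target content based on the search term that brought the visitor here.
            if (v.referralTerm() != null && v.referralTerm().contains("deal")) {
                segments.add("bargain-hunters");
            }
            // Behavior: recommend more of what the visitor has already clicked on.
            if ("sports".equals(v.lastClickedCategory())) {
                segments.add("sports-fans");
            }
            // Location: surface offers relevant to the visitor's likely departure city.
            if ("Chicago".equals(v.city())) {
                segments.add("chicago-offers");
            }
            return segments;
        }

        public static void main(String[] args) {
            Visitor visitor = new Visitor("flight deals", "sports", "Chicago");
            System.out.println(assignSegments(visitor)); // [bargain-hunters, sports-fans, chicago-offers]
        }
    }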

    Read the article

  • How to ensure custom serverListener events fires before action events

    - by frank.nimphius
    Using JavaScript in ADF Faces you can queue custom events defined by an af:serverListener tag. If the custom event, however, is queued from an af:clientListener on a command component, then the command component's action and action listener methods fire before the queued custom event. If you have a use case, for example in combination with client-side integration of 3rd party technologies like HTML, Applets or similar, then you may want to change the order of execution. The way to change the execution order is to invoke the command item action from the client event method that handles the custom event propagated by the af:serverListener tag. The following four steps ensure you do this successfully: 1. Call cancel() on the event object passed to the client JavaScript function invoked by the af:clientListener tag. 2. Queue the custom event as an immediate action by setting the last argument in the custom event call to true: function invokeCustomEvent(evt){ evt.cancel(); var custEvent = new AdfCustomEvent( evt.getSource(), "mycustomevent", {message:"Hello World"}, true); custEvent.queue(); } 3. When handling the custom event on the server, look up the command item, for example a button, to queue its action event. This way you simulate a user clicking the button. Use the following code: ActionEvent event = new ActionEvent(component); event.setPhaseId(PhaseId.INVOKE_APPLICATION); event.queue(); The component reference needs to be changed to the handle of the command item whose action method you want to execute. 4. If the command component has behavior tags, like af:fileDownloadActionListener or af:setPropertyListener, defined, then these are also executed when the action event is queued. However, behavior tags like the file download action listener may require a full page refresh to be issued in order to work, in which case the custom event cannot be issued as a partial refresh. File download action tag: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_fileDownloadActionListener.html "Since file downloads must be processed with an ordinary request - not XMLHttp AJAX requests - this tag forces partialSubmit to be false on the parent component, if it supports that attribute."
    To issue a custom event as a non-partial submit, the previously shown sample code would need to be changed as shown below: function invokeCustomEvent(evt){ evt.cancel(); var custEvent = new AdfCustomEvent( evt.getSource(), "mycustomevent", {message:"Hello World"}, true); custEvent.queue(false); } To learn more about custom events and the af:serverListener, please refer to the tag documentation: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_serverListener.html
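    To round out step 3, here is a minimal sketch of what the complete server-side handler referenced by the af:serverListener might look like. This is not code from the original post: the bean name, the method name onMyCustomEvent, the component id "cb1" and the parameter name "message" are illustrative assumptions, and with naming containers you may need the full client id when looking the button up.

    import java.util.Map;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.event.ActionEvent;
    import javax.faces.event.PhaseId;
    import oracle.adf.view.rich.render.ClientEvent;

    // Illustrative managed bean: handles the "mycustomevent" custom event and then
    // queues the command button's action so it fires after the custom event.
    public class CustomEventHandlerBean {

        public void onMyCustomEvent(ClientEvent clientEvent) {
            // Payload sent from AdfCustomEvent on the client ({message:"Hello World"}).
            Map<String, Object> params = clientEvent.getParameters();
            Object message = params.get("message"); // could drive which action to queue

            // Look up the command component whose action should now be executed.
            // "cb1" is a made-up id; adjust it to your page.
            FacesContext ctx = FacesContext.getCurrentInstance();
            UIComponent button = ctx.getViewRoot().findComponent("cb1");
            if (button != null) {
                // Queue the action event as if the user had pressed the button.
                ActionEvent event = new ActionEvent(button);
                event.setPhaseId(PhaseId.INVOKE_APPLICATION);
                event.queue();
            }
        }
    }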

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate. Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), Start/Stop (STSP). Note the Field values/codes defined for each event. CC&B comes with the flexibility to define new sets of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page. So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activities? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events. One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service. Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event? System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.
    The post cancel algorithm provided by the product - SEV POST CAN - does the following (below is the algorithm's description): This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter. Notice the wording about a completed 'disconnect' event. This algorithm implicitly checks for Field Activities with a completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but it was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead. To conclude, if you introduce new customer events that extend or simulate the base customer events included in the base product, ensure that there is no other impact, direct or indirect, on other business functions that the application has to offer.
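    The root cause is easier to see in code form. The sketch below is not actual CC&B source (the algorithm ships with the product, and the class and field names here are invented); it only mimics the documented behavior of SEV POST CAN, which looks for a completed disconnect field activity tied to the base CNP event, so an activity raised for the user-defined CNPW event never satisfies the check.

    import java.util.List;

    // Illustrative sketch only -- not CC&B product code. It mimics the documented
    // post-cancel behavior: start a reconnect process only when a COMPLETED
    // disconnect field activity raised for the base CNP event is found.
    public class PostCancelSketch {

        record FieldActivity(String customerEvent, String status) {}

        static boolean shouldStartReconnect(List<FieldActivity> activities) {
            for (FieldActivity fa : activities) {
                // The check is effectively tied to the base event code "CNP";
                // a user-defined code such as "CNPW" never matches.
                if ("CNP".equals(fa.customerEvent()) && "COMPLETED".equals(fa.status())) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            List<FieldActivity> activities = List.of(new FieldActivity("CNPW", "COMPLETED"));
            System.out.println(shouldStartReconnect(activities)); // false -- the customer's issue
        }
    }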

    Read the article

  • Do you know your ADF "grace period?"

    - by Chris Muir
    What does the term "support" mean to you in context of vendors such as Oracle giving your organization support with our products? Over the last few weeks I'm taken a straw poll to discuss this very question with customers, and I've received a wide array of answers much to my surprise (which I've paraphrased): "Support means my staff can access dedicated resources to assist them solve problems" "Support means I can call Oracle at anytime to request assistance" "Support means we can expect fixes and patches to bugs in Oracle software" The last expectation is the one I'd like to focus on in this post, keep it in mind while reading this blog. From Oracle's perspective as we're in the business of support, we in fact offer numerous services which are captured on the table in the following page. As the text under the table indicates, you should consult the relevant Oracle Lifetime Support brochures to understand the length of time Oracle will support Oracle products. As I'm a product manager for ADF that sits under the FMW tree of Oracle products, let's consider ADF in particular. The FMW brochure is found here. On page 8 and 9 you'll see the current "Application Development Framework 11gR1 (11.1.1.x)" and "Application Development Framework 11gR2 (11.1.2)" releases are supported out to 2017 for Extended Support. This timeframe is pretty standard for Oracle's current released products, though as new releases roll in we should see those dates extended. On page 8 of the PDF note the comment at the end of this page that refers to the Oracle Support document 209768.1: For more-detailed information on bug fix and patch release policies, please refer to the “Error Correction Support Policy” on MyOracle Support. This policy document is important as it introduces Oracle's Error Correction Support Policy which addresses "patches and fixes". You can find it attached the previous Oracle Support document 209768.1. Broadly speaking while Oracle does provide "generalized support" up to 2017 for ADF, the Error Correction Support Policy dictates when Oracle will provide "patches and fixes" for Oracle software, and this is where the concept of the "grace period" comes in. As Oracle releases different versions of Oracle software, say 11.1.1.4.0, you are fully supported for patches and fixes for that specific version. However when we release the next version, say 11.1.1.5.0, Oracle provides at minimum of 3 months to a maximum of 1 year "grace period" where we'll continue to provide patches and fixes for the previous version. This gives you time to move from 11.1.1.4.0 to 11.1.1.5.0 without being unsupported for patches and fixes. The last paragraph does generalize as I've attempted to highlight the concept of the grace period rather than the specific dates for any version. For specific ADF and FMW versions and their respective grace periods and when they terminated you must visit Oracle Support Note 1290894.1. I'd like to include a screenshot here of the relevant table from that Oracle Support Note but as it is will be frequently updated it's better I force you to visit that note. Be careful to heed the comment in the note: According to policy, the Grace Period has passed because a newer Patch Set has been released for more than a year. Its important to note that the Lifetime Support Policy and Error Correction Support Policy documents are the single source of truth, subject to change, and will provide exceptions when required. 
This My Oracle Support document is providing a summary of the Grace Period dates and time lines for planning purposes. So remember to return to the policy document for all definitions, note 1290894.1 is a summary only and not guaranteed to be up to date or correct. A last point from Oracle's perspective. Why doesn't Oracle provide patches and fixes for all releases as long as they're supported? Amongst other reasons, it's a matter of practicality. Consider JDeveloper 10.1.3 released in 2005. JDeveloper 10.1.3 is still currently supported to 2017, but since that version was released there has been just under 20 newer releases of JDeveloper. Now multiply that across all Oracle's products and imagine the number of releases Oracle would have to provide fixes and patches for, and maintain environments to test them, build them, staff to write them and more, it's simple beyond the capabilities of even a large software vendor like Oracle. So the "grace period" restricts that patches and fixes window to something manageable. In conclusion does the concept of the "grace period" matter to you? If you define support as "getting assistance from Oracle" then maybe not. But if patches and fixes are important to you, then you need to understand the "grace period" and operate within the bounds of Oracle's Error Correction Support Policy. Disclaimer: this blog post was written July 2012. Oracle Support policies do change from time to time so the emphasis is on you to double check the facts presented in this blog.

    Read the article

  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. I've often heard similar phrases around Scrum that clue me in to someone who doesn't understand Scrum.  The phrases go something like this: "We don't do Agile because the idea of letting people just do whatever they want is wrong.  We believe in a more structured approach." (i.e. Work is Prison, and I'm the Warden!) "I love Agile.  Agile lets us do whatever we want!" (Cowboy Agile?) "We're Agile, but we use a process that I've created." (Cowboy Agile?) All of those phrases have one thing in common:  the assumption that Agile, and I mean Scrum, lets you do whatever you want.  This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods. Scrum and Waterfall Compared Since Scrum and Waterfall are two of the most commonly used methodologies, a little bit of contrasting and comparing is in order. In each pairing below, the Waterfall practice comes first, followed by its Scrum counterpart.
    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on. Scrum: The team members define the tasks and estimates of the stories for the current iteration. Any team member may work on any task in the iteration.
    Waterfall: Usually only a few milestones need to be met, the milestones are measured in months, and these milestones are expected to be missed. Little work is ever done to improve estimates and poor estimators can hide behind high estimates. Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong, even when they were under. Repeated misses can get the entire team fired.
    Waterfall: Partially completed work is normal. Scrum: Partially completed work doesn't count.
    Waterfall: Nobody knows the task you're working on. Scrum: Everyone knows what you're working on, whether or not you're making progress, and how much longer you think it's going to take, in hours.
    Waterfall: Little requirement to show working code; prototypes are ok. Scrum: Working code must be shown each iteration; no smoke and mirrors allowed.
    Waterfall: Testing is done in lengthy cycles at the end of development and developers aren't held accountable. Scrum: Testing is part of the team. If the testers don't accept the story as complete, the team can't count it. Complete means that the story's functionality works as designed, and the team can't have any open defects on the story.
    Waterfall: Velocity is rarely truly measured and difficult to evaluate. Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.
    Waterfall: A business analyst writes requirements, designers mock up screens, and developers hide behind "I did it just like the spec doc told me to and made the screen exactly like the picture." Scrum: Developers are expected to collaborate in real time. If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional. Designers and Business Analysts are part of the team and must do their work in iterations slightly ahead of the developers.
    Waterfall: Upper Management is often surprised: "You told me things were going well two months ago!" Scrum: Management receives updates at the end of every iteration showing them exactly what the team did and how that compares to what is remaining in the backlog. Managers know every iteration what their money is buying.
    Waterfall: Status meetings are rare or don't occur, and email is a primary form of communication. Scrum: Teams coordinate every single day with each other and use other high bandwidth communication channels to make sure they're making progress. Email is used only as a last resort. Instead, team members stand up, walk to each other, and talk, face to face. If that's not possible, they pick up the phone.
    Waterfall: If someone asks what happened, it's at the end of a lengthy development cycle measured in months, and nobody really knows why it happened. Scrum: Someone asks what happened every iteration. The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.
    That's probably enough for now.  As you can see, a lot is required of Scrum teams! One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility.  This is a very good thing, since small groups usually come up with better and more insightful work than single individuals.  This shift also results in better velocity.  Team members can take vacations and the rest of the team simply picks up the slack.  With Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more out of every team member and as a result, Scrum teams outperform non-Scrum teams working 60 hour weeks. Recommended Reading: Everyone considering Scrum should read Mike Cohn's excellent book, User Stories Applied. Technorati Tags: Agile,Scrum,Waterfall

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate) and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and Organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much cost the air conditioning is because you have a team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of: components units cost per unit and then multiply that times the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create an “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element Azure Component Unit of Measure Cost Per Unit Estimated Use of Component Total Cost Per Component Cumulative Cost               Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type. Measure and Project A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of API’s greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into ay of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model. 
Sample and Estimate Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and using methods like regression and so on project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the larger the sample (in this case I recommend the entire population, not a smaller sample) is key. References and Tools Articles: http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx http://technet.microsoft.com/en-us/magazine/gg213848.aspx http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/ http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx   Other Tools: http://cloud-assessment.com/ http://communities.quest.com/community/cloud_tools
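    One way to make the Simple Estimation matrix concrete is a tiny calculator that multiplies each component's estimated usage by a cost per unit and accumulates the total, mirroring the spreadsheet structure described above. This is only an illustrative sketch: the components, unit rates and usage figures below are placeholders, not current Windows Azure prices, so treat the official price list as the source of truth.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative cost-model sketch: component x cost per unit x estimated usage.
    // All rates and usage numbers are placeholders, not actual Azure pricing.
    public class AzureCostEstimate {

        record LineItem(String component, String unit, double costPerUnit, double estimatedUnits) {
            double total() { return costPerUnit * estimatedUnits; }
        }

        public static void main(String[] args) {
            Map<String, LineItem> model = new LinkedHashMap<>();
            model.put("Compute",   new LineItem("Compute instances",  "hours",     0.12, 2 * 720)); // 2 instances, full month
            model.put("Storage",   new LineItem("Blob/table storage", "GB-months", 0.15, 50));
            model.put("Bandwidth", new LineItem("Outbound bandwidth", "GB",        0.15, 100));
            model.put("Database",  new LineItem("SQL Azure database", "DB-months", 9.99, 1));

            double cumulative = 0;
            for (LineItem item : model.values()) {
                cumulative += item.total();
                System.out.printf("%-20s %8.2f%n", item.component(), item.total());
            }
            System.out.printf("Estimated monthly cost: %.2f%n", cumulative);

            // Derive a unit cost "per transaction" for reporting back to the business.
            double expectedTransactions = 100_000;
            System.out.printf("Cost per transaction:   %.4f%n", cumulative / expectedTransactions);
        }
    }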

    Read the article

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default FilterAttribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. What does that mean? This gives you direct programmatic control over handling your 500 errors in the same way that ASP.NET and CustomErrors give you configurable control of handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your Exceptions, it's pretty sweet! Global Filters The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute());     } HandleErrorAttributes The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using Filter attributes for our AcceptVerbs and RequiresAuthorization, now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. The ExceptionType allows you to specify what exception that attribute should handle. The View allows you to specify which error view (page) you want it to redirect to. Last but not least, the Master allows you to control which master page (or as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute         {             ExceptionType = typeof(DbException),             // DbError.cshtml is a view in the Shared folder.             View = "DbError",             Order = 2         });         filters.Add(new HandleErrorAttribute());     }Error Views All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error can not have a specified model! This is because they already have a model, and that is System.Web.Mvc.HandleErrorInfo @model System.Web.Mvc.HandleErrorInfo           @{     ViewBag.Title = "DbError"; } <h2>A Database Error Has Occurred</h2> @if (Model != null) {     <p>@Model.Exception.GetType().Name<br />     thrown in @Model.ControllerName @Model.ActionName</p> }Errors Outside of the MVC Pipeline The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET. You turn on custom errors, specify error codes and paths to error pages, etc. It is important to remember that these will happen for anything and everything outside of what the HandleErrorAttribute handles. Also, these will happen whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline. 
<system.web>  <customErrors mode="On" defaultRedirect="~/error">     <error statusCode="404" redirect="~/error/notfound"></error>  </customErrors>Sample Controllers public class ExampleController : Controller {     public ActionResult Exception()     {         throw new ArgumentNullException();     }     public ActionResult Db()     {         // Inherits from DbException         throw new MyDbException();     } } public class ErrorController : Controller {     public ActionResult Index()     {         return View();     }     public ActionResult NotFound()     {         return View();     } } Putting It All Together If we have all the code above included in our MVC 3 project, here is how the following scenario's will play out: 1.       A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view. 2.       A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view. 3.       Go to a non-existent page. You will be redirect to the Error controller's NotFound action by the CustomErrors configuration for HTTP StatusCode 404. But don't take my word for it, download the sample project and try it yourself. Three Important Lessons Learned For the most part this is all pretty straight forward, but there are a few gotcha's that you should remember to watch out for: 1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: The HandleError Info model. Do not try to set the model on your error page or pass in a different object through a controller action, it will just blow up and cause a second exception after your first exception! 2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's ok! Remember, a controller's job is to get the model for a view, but an error already has a model ready to give to the view, thus there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and route for it. But then when you render that error view from your action, you can only use the HandlerErrorInfo model or ViewData dictionary to populate your page. 3) The HandleErrorAttribute obeys if CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with customer errors. In Summary The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. The custom errors are for redirecting from error pages caused by other HTTP codes.

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible, so that you can change object properties at run-time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package you have to pass the correct Package Configuration. I usually use XML configuration files and I also force everyone that works with me to make sure that an object that is used in several packages has the same name in all packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of one of those objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have a configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific – or "local" – XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx Now, how can we improve this even more? I'd like to have a package that, when it's run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table with the columns described below. In this table you'll put the values that you want to be used at runtime by your package. The ConfigurationFilter column specifies to which package that configuration line has to be applied. A package will use that line only if the value specified in the ConfigurationFilter column is equal to its name; in the sample table, only the package named "simple-package" will use line number two. There is an exception here: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach I've described before. The ConfigurationValue column contains the value you want to be applied at runtime and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value and the Checksum column contains a calculated value that is simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS in order to put DTS Configurations into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option: DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration" The /AC option expects a string with the following format: <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager in the loaded package.
The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configuration ($$DTLoggedExec_Global and $$DTLoggedExec_Local) also injected by DTLoggedExec, used to load “local” and “global” configuration. Now, you may start to wonder why this approach cannot be used without having all this stuff going around, but just passing to a package always two XML DTS Configuration files, (to have to “local” and the “global” configurations) doing something like this: DTLoggedExec.exe /FILE:”mypackage.dtsx” /CONF:”global.dtsConfig” /CONF:”mypackage.dtsConfig” The problem is that this approach doesn’t work if you have, in one of the two configuration file, a value that has to be applied to an object that doesn’t exists in the loaded package. This situation will raise an error that will halt package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this will make deployment and management harder, since you’ll have to deal with a great number of configuration files. The Auto-Configuration approach solve all these problems at once! We’re using it in a project where we have hundreds of packages and I can tell you that deployment of packages and their configuration for the pre-production and production environment has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, are very welcome!

    Read the article

  • Process Centric Banking: Loan Origination Solution

    - by Manish Palaparthy
    There is an old proverb that goes, "The difference between theory and practice is greater in practice than in theory". So, we keep doing numerous Proofs of Concept (PoCs) with our own products on various business cases in order to analyze them deeply, understand them and explain them to our customers. We then present our learnings as they happened. Awareness of how each PoC went should help readers trust the results coming out of these PoCs. Here I present one such PoC in which we invested a lot of time and effort.

    Process Centric Banking: Loan Origination Solution

    Loan Origination is the process by which a borrower applies for a new loan and the lender processes that application. It includes the series of steps taken by the bank from the point the customer shows interest in a loan product all the way to the disbursal of funds. The Loan Origination process is relevant for many kinds of lenders in financial services: banks, credit unions, NBFCs (Non Banking Financial Companies) and so on. For simplicity's sake, I will use "Bank" as the lending institution in the rest of this article.

    Loan Origination is one of the core processes for a bank, as it is the process by which the bank creates the assets from which the institution earns most of its profits. A well-tuned loan origination process can affect the bank in many positive ways, and banks have always shown great interest in automating it for that reason. However, due to constant changes in the customer environment, market dynamics, prevailing economic conditions, cost pressures and the regulatory environment, they run into a lot of challenges. Let me categorize some of these challenges.

    Customer Environment
    - Multiple channels: the customer can use any of the available channels (Internet Banking, Email, Fax, Branch, Phone Banking, ATM, Broker, Mobile, Snail Mail) to perform all or some of the activities related to her application.
    - Visibility into the origination process: customers expect immediate updates on the status of loan processing and alert messages.
    - Reduced turnaround time: customers expect loans to be processed with the least possible turnaround time.
    - Reduced loan processing fees: partly due to market dynamics, the customer expects the loan processing fee to be negligible.

    Market Dynamics
    - Competitive environment: the competition keeps creating variants of loan products to attract customers; the bank needs to create similar product variants with better offers to attract new customers or keep existing ones.
    - Ability to migrate loans from one vendor to another: it has become really easy for retail customers to move from one bank to another, given the low loan processing fees and highly attractive offers. How does the bank protect its customer base while actively engaging with potential customers who bank with competitors?
    - Flexibility to react to market developments: market developments greatly influence loan processing, underwriting, asset valuation and risk mitigation rules. Can the bank modify rules and policies? The idea is not just to react to market developments but to proactively manage them.

    Economic Conditions
    - Constant change in various rates, and their implications for the rates and rules applied when on-boarding a loan: how quickly can the bank apply changes to the rates offered to customers when the central bank changes its rates?
    - Requirements of audit by the central banker: tough economic conditions have demanded much more stringent audit rules and tests.
      The bank needs to produce ready reports (historic and operational) for audit compliance.
    - Risk mitigation: while risk mitigation has always been a key concern for the bank, this is the area where the bank's underwriters and risk analysts spend the most time when processing a loan application. In order to reduce turnaround time, the bank cannot compromise on its risk mitigation strategies.

    Cost Pressures
    - Reduce the cost of processing per application: to deliver a reduced loan processing fee to the customer, the bank needs to keep its cost of processing a loan application low.
    - Meet customer turnaround expectations while reducing the queues and the systems used to process the loan application: the loan application could spend a lot of time waiting in a queue for further processing. Different volumes and patterns of applications demand different queuing algorithms, so the bank needs real-time visibility into these queues and the flexibility to change queuing algorithms at runtime.
    - Increase the use of electronic communication and reduce branch channel usage: less automation not only increases turnaround time, it also adds to the cost of reaching out to customers.

    The objective of our PoC was to implement a Loan Origination Solution whose ownership lies with the bank and that effectively meets the challenges listed above. We built a simple storyboard for the solution and then went about implementing it using Oracle BPM Suite and WebCenter Content: Imaging. The web UI has been built on ADF technologies, while the integration with core services has been implemented using the underlying SOA infrastructure. The BPM process model is quite exhaustive and can meet all the challenges listed above to a reasonable degree.

    A bank intending to implement an end-to-end Loan Origination Solution has multiple options at its disposal. It can:
    - Develop a custom Loan Origination Application from scratch: maximum opportunity to build what you want, but inflexible to upgrade and maintain, with a higher TCO in the long term.
    - Buy a packaged application and customize it: customizing a generic loan application can be tedious and prove as difficult as building from scratch.
    - Build it using many disparate, un-integrated tools: initially this seems easier than developing from scratch, but without integrated tool sets it is not a viable approach either.
    - Adopt a solution based on a framework: independent services and business process modeling provide a decoupled architecture that is flexible.

    We built this framework end-to-end, with the core process of loan origination and several sub-processes such as analysing and defining customer needs, customer credit verification, identity checks, legal review, new customer registration and risk assessment.

    Read the article

  • Simple iOS glDrawElements - BAD_ACCESS

    - by user699215
    You can copy paste this into the default OpenGl template created in Xcode. Why am I not seeing anything :-) It is strange as the glDrawArrays(GL_TRIANGLES, 0, 3); is working fine, but with glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); Is giving BAD_ACCESS? Copy paste this into Xcode default OpenGl template: ViewController #import "ViewController.h" #define BUFFER_OFFSET(i) ((char *)NULL + (i)) // Uniform index. enum { UNIFORM_MODELVIEWPROJECTION_MATRIX, UNIFORM_NORMAL_MATRIX, NUM_UNIFORMS }; GLint uniforms[NUM_UNIFORMS]; // Attribute index. enum { ATTRIB_VERTEX, ATTRIB_NORMAL, NUM_ATTRIBUTES }; @interface ViewController () { GLKMatrix4 _modelViewProjectionMatrix; GLKMatrix3 _normalMatrix; float _rotation; GLuint _vertexArray; GLuint _vertexBuffer; NSArray* arrayOfVertex; } @property (strong, nonatomic) EAGLContext *context; @property (strong, nonatomic) GLKBaseEffect *effect; - (void)setupGL; - (void)tearDownGL; @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; GLKView *view = (GLKView *)self.view; view.context = self.context; view.drawableDepthFormat = GLKViewDrawableDepthFormat24; [self setupGL]; } - (void)dealloc { [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; if ([self isViewLoaded] && ([[self view] window] == nil)) { self.view = nil; [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } self.context = nil; } // Dispose of any resources that can be recreated. } GLuint vertexBufferID; GLuint indexBufferID; static const GLfloat vertices[9] = { -0.5, -0.5, 0.5, 0.5, -0.5, 0.5, -0.5, 0.5, 0.5 }; static const GLubyte indices[3] = { 0, 1, 2 }; - (void)setupGL { [EAGLContext setCurrentContext:self.context]; // [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); // glGenVertexArraysOES(1, &_vertexArray); // glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &vertexBufferID); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glGenBuffers(1, &indexBufferID); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, // Specifies the index of the generic vertex attribute to be modified. 3, // Specifies the number of components per generic vertex attribute. Must be 1, 2, 3, 4. 
GL_FLOAT, // GL_FALSE, // 0, // BUFFER_OFFSET(0)); // // glBindVertexArrayOES(0); } - (void)tearDownGL { [EAGLContext setCurrentContext:self.context]; glDeleteBuffers(1, &_vertexBuffer); glDeleteVertexArraysOES(1, &_vertexArray); self.effect = nil; } #pragma mark - GLKView and GLKViewController delegate methods - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } int i; - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect { glClearColor(0.65f, 0.65f, 0.65f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // glBindVertexArrayOES(_vertexArray); // Render the object with GLKit [self.effect prepareToDraw]; //glDrawArrays(GL_TRIANGLES, 0, 3); // Render the object again with ES2 // glDrawArrays(GL_TRIANGLES, 0, 3); glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); } @end

    Read the article

  • Binary Search Tree Implementation

    - by Gabe
    I've searched the forum, and tried to implement the code in the threads I found. But I've been working on this real simple program since about 10am, and can't solve the seg. faults for the life of me. Any ideas on what I'm doing wrong would be greatly appreciated. BST.h (All the implementation problems should be in here.) #ifndef BST_H_ #define BST_H_ #include <stdexcept> #include <iostream> #include "btnode.h" using namespace std; /* A class to represent a templated binary search tree. */ template <typename T> class BST { private: //pointer to the root node in the tree BTNode<T>* root; public: //default constructor to make an empty tree BST(); /* You have to document these 4 functions */ void insert(T value); bool search(const T& value) const; bool search(BTNode<T>* node, const T& value) const; void printInOrder() const; void remove(const T& value); //function to print out a visual representation //of the tree (not just print the tree's values //on a single line) void print() const; private: //recursive helper function for "print()" void print(BTNode<T>* node,int depth) const; }; /* Default constructor to make an empty tree */ template <typename T> BST<T>::BST() { root = NULL; } template <typename T> void BST<T>::insert(T value) { BTNode<T>* newNode = new BTNode<T>(value); cout << newNode->data; if(root == NULL) { root = newNode; return; } BTNode<T>* current = new BTNode<T>(NULL); current = root; current->data = root->data; while(true) { if(current->left == NULL && current->right == NULL) break; if(current->right != NULL && current->left != NULL) { if(newNode->data > current->data) current = current->right; else if(newNode->data < current->data) current = current->left; } else if(current->right != NULL && current->left == NULL) { if(newNode->data < current->data) break; else if(newNode->data > current->data) current = current->right; } else if(current->right == NULL && current->left != NULL) { if(newNode->data > current->data) break; else if(newNode->data < current->data) current = current->left; } } if(current->data > newNode->data) current->left = newNode; else current->right = newNode; return; } //public helper function template <typename T> bool BST<T>::search(const T& value) const { return(search(root,value)); //start at the root } //recursive function template <typename T> bool BST<T>::search(BTNode<T>* node, const T& value) const { if(node == NULL || node->data == value) return(node != NULL); //found or couldn't find value else if(value < node->data) return search(node->left,value); //search left subtree else return search(node->right,value); //search right subtree } template <typename T> void BST<T>::printInOrder() const { //print out the value's in the tree in order // //You may need to use this function as a helper //and create a second recursive function //(see "print()" for an example) } template <typename T> void BST<T>::remove(const T& value) { if(root == NULL) { cout << "Tree is empty. No removal. "<<endl; return; } if(!search(value)) { cout << "Value is not in the tree. No removal." 
<< endl; return; } BTNode<T>* current; BTNode<T>* parent; current = root; parent->left = NULL; parent->right = NULL; cout << root->left << "LEFT " << root->right << "RIGHT " << endl; cout << root->data << " ROOT" << endl; cout << current->data << "CURRENT BEFORE" << endl; while(current != NULL) { cout << "INTkhkjhbljkhblkjhlk " << endl; if(current->data == value) break; else if(value > current->data) { parent = current; current = current->right; } else { parent = current; current = current->left; } } cout << current->data << "CURRENT AFTER" << endl; // 3 cases : //We're looking at a leaf node if(current->left == NULL && current->right == NULL) // It's a leaf { if(parent->left == current) parent->left = NULL; else parent->right = NULL; delete current; cout << "The value " << value << " was removed." << endl; return; } // Node with single child if((current->left == NULL && current->right != NULL) || (current->left != NULL && current->right == NULL)) { if(current->left == NULL && current->right != NULL) { if(parent->left == current) { parent->left = current->right; cout << "The value " << value << " was removed." << endl; delete current; } else { parent->right = current->right; cout << "The value " << value << " was removed." << endl; delete current; } } else // left child present, no right child { if(parent->left == current) { parent->left = current->left; cout << "The value " << value << " was removed." << endl; delete current; } else { parent->right = current->left; cout << "The value " << value << " was removed." << endl; delete current; } } return; } //Node with 2 children - Replace node with smallest value in right subtree. if (current->left != NULL && current->right != NULL) { BTNode<T>* check; check = current->right; if((check->left == NULL) && (check->right == NULL)) { current = check; delete check; current->right = NULL; cout << "The value " << value << " was removed." << endl; } else // right child has children { //if the node's right child has a left child; Move all the way down left to locate smallest element if((current->right)->left != NULL) { BTNode<T>* leftCurrent; BTNode<T>* leftParent; leftParent = current->right; leftCurrent = (current->right)->left; while(leftCurrent->left != NULL) { leftParent = leftCurrent; leftCurrent = leftCurrent->left; } current->data = leftCurrent->data; delete leftCurrent; leftParent->left = NULL; cout << "The value " << value << " was removed." << endl; } else { BTNode<T>* temp; temp = current->right; current->data = temp->data; current->right = temp->right; delete temp; cout << "The value " << value << " was removed." << endl; } } return; } } /* Print out the values in the tree and their relationships visually. Sample output: 22 18 15 10 9 5 3 1 */ template <typename T> void BST<T>::print() const { print(root,0); } template <typename T> void BST<T>::print(BTNode<T>* node,int depth) const { if(node == NULL) { std::cout << std::endl; return; } print(node->right,depth+1); for(int i=0; i < depth; i++) { std::cout << "\t"; } std::cout << node->data << std::endl; print(node->left,depth+1); } #endif main.cpp #include "bst.h" #include <iostream> using namespace std; int main() { BST<int> tree; cout << endl << "LAB #13 - BINARY SEARCH TREE PROGRAM" << endl; cout << "----------------------------------------------------------" << endl; // Insert. cout << endl << "INSERT TESTS" << endl; // No duplicates allowed. tree.insert(0); tree.insert(5); tree.insert(15); tree.insert(25); tree.insert(20); // Search. 
cout << endl << "SEARCH TESTS" << endl; int x = 0; int y = 1; if(tree.search(x)) cout << "The value " << x << " is on the tree." << endl; else cout << "The value " << x << " is NOT on the tree." << endl; if(tree.search(y)) cout << "The value " << y << " is on the tree." << endl; else cout << "The value " << y << " is NOT on the tree." << endl; // Removal. cout << endl << "REMOVAL TESTS" << endl; tree.remove(0); tree.remove(1); tree.remove(20); // Print. cout << endl << "PRINTED DIAGRAM OF BINARY SEARCH TREE" << endl; cout << "----------------------------------------------------------" << endl; tree.print(); cout << endl << "Program terminated. Goodbye." << endl << endl; } BTNode.h #ifndef BTNODE_H_ #define BTNODE_H_ #include <iostream> /* A class to represent a node in a binary search tree. */ template <typename T> class BTNode { public: //constructor BTNode(T d); //the node's data value T data; //pointer to the node's left child BTNode<T>* left; //pointer to the node's right child BTNode<T>* right; }; /* Simple constructor. Sets the data value of the BTNode to "d" and defaults its left and right child pointers to NULL. */ template <typename T> BTNode<T>::BTNode(T d) : left(NULL), right(NULL) { data = d; } #endif Thanks.

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    Like many of the existing HTTP APIs for cloud services, AWS provides a set of platform SDKs that hide many of the complexities present in the APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making HTTP calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous and use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept). In the case of S3, the HTTP Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while in transit. For that, it uses a signature or hash of the message content and some of the headers, computed with a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working in Metro for the first time will face when consuming S3: the new WinRT APIs, their asynchronous nature, and the complexity of generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I ran into while trying to consume this Amazon service.

    1. Generating the signature for the Authorization header

    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) to represent a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the static methods of the WinRT class CryptographicBuffer, available as part of the namespace previously mentioned.

    private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
    {
        var stringToSign = string.Format("{0}\n" +
            "\n" +
            "\n" +
            "\n" +
            "x-amz-date:{1}\n" +
            "/{2}/",
            httpMethod, timestamp, resource);

        var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
        var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
        var hmacKey = algorithm.CreateKey(keyMaterial);

        var signature = CryptographicEngine.Sign(
            hmacKey,
            CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign))
        );

        return CryptographicBuffer.EncodeToBase64String(signature);
    }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, the method generates the signature required for creating a new bucket.
    An HMAC-SHA1 hash is computed using a secret (symmetric) key provided by AWS in the management console.

    2. Sending an HTTP request to the S3 service

    WinRT also ships with System.Net.Http.HttpClient, which was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional HttpWebRequest class, and also solves some of the limitations found in the latter. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is absolutely required for consuming S3. HttpClient is also friendlier for unit testing, as it can receive an HttpMessageHandler in its constructor that can be faked to emulate a real HTTP call. This is how the code for consuming the service with HttpClient looks:

    public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
    {
        var timestamp = string.Format("{0:r}", DateTime.UtcNow);
        var auth = DeriveAuthToken(name, "PUT", timestamp);

        var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
        request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
        request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
        request.Headers.Add("x-amz-date", timestamp);

        var client = new HttpClient();
        var response = await client.SendAsync(request);

        return new S3Response
        {
            Succeed = response.StatusCode == HttpStatusCode.OK,
            Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
        };
    }

    You will notice a few additional things in this code. By default, HttpClient validates the values of some well-known headers, and Authorization is one of them: it won't allow you to set a value with ":" in it, which is something that S3 expects. That's not a problem, though, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to consume the asynchronous calls in a sequential, synchronous-looking style. If you want to unit test this code and fake the call to the real S3 service, you will have to modify it to inject a custom HttpMessageHandler into the HttpClient.
    The following implementation illustrates the concept:

    public class FakeHttpMessageHandler : HttpMessageHandler
    {
        HttpResponseMessage response;

        public FakeHttpMessageHandler(HttpResponseMessage response)
        {
            this.response = response;
        }

        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            var tcs = new TaskCompletionSource<HttpResponseMessage>();
            tcs.SetResult(response);
            return tcs.Task;
        }
    }

    You can use this handler to inject any response you like while unit testing the code.
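    For completeness, here is a hedged sketch of how the fake handler could be wired up in a test. It assumes the code above has been refactored, as the author suggests, so that the HttpClient (or its handler) is injected rather than created inside CreateBucket; the S3Client type and its constructor are invented for illustration.

    using System.Diagnostics;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class CreateBucketTest
    {
        public static async Task Run()
        {
            // Canned response the fake handler returns instead of calling S3.
            var fakeResponse = new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(string.Empty)
            };

            var handler = new FakeHttpMessageHandler(fakeResponse);
            var client = new HttpClient(handler);

            // Hypothetical refactored client that takes the HttpClient as a dependency.
            var s3 = new S3Client("access-key", "secret-key", client);
            var result = await s3.CreateBucket("my-test-bucket");

            // No network call was made; the fake handler produced the 200 OK.
            Debug.Assert(result.Succeed);
        }
    }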

    Read the article

  • C#/.NET Little Wonders – Cross Calling Constructors

    - by James Michael Hare
    Just a small post today; it's the final iteration before our release and things are crazy here!  This is another little tidbit that I love using, and it should be fairly common knowledge, yet I've noticed many times that less experienced developers tend to have redundant constructor code when they overload their constructors.

    The Problem – repetitive code is less maintainable

    Let's say you are designing a messaging system and want a class to represent the properties for a Receiver, so perhaps you design a ReceiverProperties class to represent this collection of properties. Perhaps you decide to make ReceiverProperties immutable, and so you have several constructors that you can use for alternative construction:

    // Constructs a set of receiver properties.
    public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
    {
        ReceiverType = receiverType;
        Source = source;
        IsDurable = isDurable;
        IsBuffered = isBuffered;
    }

    // Constructs a set of receiver properties with buffering on by default.
    public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
    {
        ReceiverType = receiverType;
        Source = source;
        IsDurable = isDurable;
        IsBuffered = true;
    }

    // Constructs a set of receiver properties with buffering on and durability off.
    public ReceiverProperties(ReceiverType receiverType, string source)
    {
        ReceiverType = receiverType;
        Source = source;
        IsDurable = false;
        IsBuffered = true;
    }

    Note: keep in mind this is just a simple example for illustration, and in some cases default parameters can also help clean this up, but they have issues of their own. While strictly speaking there is nothing wrong with this code, logically it suffers from maintainability flaws. Consider what happens if you add a new property to the class: you have to remember to guarantee that it is set appropriately in every constructor. This can cause subtle bugs, and it becomes even uglier when the constructors do more complex logic or error handling, or there are numerous potential overloads (especially if you can't easily see them all within one screen's height).

    The Solution – cross-calling constructors

    I'd wager nearly everyone knows how to call your base class's constructor, but you can also cross-call to one of the constructors in the same class by using the this keyword in the same way you use base to call a base constructor.

    // Constructs a set of receiver properties.
    public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
    {
        ReceiverType = receiverType;
        Source = source;
        IsDurable = isDurable;
        IsBuffered = isBuffered;
    }

    // Constructs a set of receiver properties with buffering on by default.
    public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
        : this(receiverType, source, isDurable, true)
    {
    }

    // Constructs a set of receiver properties with buffering on and durability off.
    public ReceiverProperties(ReceiverType receiverType, string source)
        : this(receiverType, source, false, true)
    {
    }

    Notice there is much less code. In addition, the code you have has no repetitive logic: you define the main constructor that takes all the arguments, and the remaining constructors with defaults simply cross-call the main constructor, passing in the defaults.
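    One detail worth calling out, not shown above: when a constructor cross-calls another with this(...), the chained-to constructor body runs first, and then the body of the constructor that did the chaining. A tiny hedged sketch (the type name is invented) makes the order visible:

    using System;

    public class ChainingDemo
    {
        public ChainingDemo() : this(42)
        {
            // Runs second: the chained-to constructor has already executed.
            Console.WriteLine("ChainingDemo()");
        }

        public ChainingDemo(int value)
        {
            // Runs first when new ChainingDemo() is used.
            Console.WriteLine("ChainingDemo(int): " + value);
        }
    }

    // new ChainingDemo() prints:
    //   ChainingDemo(int): 42
    //   ChainingDemo()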
    Yes, in some cases default parameters can ease some of this for you, but default parameters only work for compile-time constants (null, string and number literals). For example, if you were creating a TradingDataAdapter that relied on an implementation of ITradingDao, the data access object used to retrieve records from the database, you might want two constructors: one that takes an ITradingDao reference, and a default constructor which constructs a specific ITradingDao for ease of use:

    public TradingDataAdapter(ITradingDao dao)
    {
        _tradingDao = dao;

        // other constructor logic
    }

    public TradingDataAdapter()
    {
        _tradingDao = new SqlTradingDao();

        // same constructor logic as above
    }

    As you can see, this isn't something we can solve with a default parameter, but we can with cross-calling constructors:

    public TradingDataAdapter(ITradingDao dao)
    {
        _tradingDao = dao;

        // other constructor logic
    }

    public TradingDataAdapter()
        : this(new SqlTradingDao())
    {
    }

    So in cases like this, where constructors have defaults that are not compile-time constants, default parameters can't help you and cross-calling constructors is one of your best options.

    Summary

    When you have just one constructor doing the job of initializing the class, you can consolidate all your logic and error handling in one place, ensuring that the behavior is consistent across all the constructor calls. This makes the code more maintainable and easier to read. There will be some cases where cross-calling constructors is sub-optimal or not possible (if, for example, the overloaded constructors take completely different types and are not just "defaulting" behaviors). You can also use default parameters, of course, but default parameter behavior in a class hierarchy can be problematic (default values are not inherited and can in fact differ), so sometimes multiple constructors are actually preferable. Regardless of why you need multiple constructors, consider cross-calling where you can to reduce redundant logic and clean up the code.

    Technorati Tags: C#,.NET,Little Wonders
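    To make the hierarchy caveat above concrete, here is a small hedged sketch (the types are invented): the default value supplied at a call site is chosen from the compile-time type of the reference, so an override that declares a different default can be silently bypassed.

    using System;

    public class BaseSender
    {
        public virtual void Send(string message, bool isDurable = true)
        {
            Console.WriteLine("Base: " + message + ", durable=" + isDurable);
        }
    }

    public class BufferedSender : BaseSender
    {
        // Re-declares the default as false; the base declares it as true.
        public override void Send(string message, bool isDurable = false)
        {
            Console.WriteLine("Buffered: " + message + ", durable=" + isDurable);
        }
    }

    public static class DefaultsDemo
    {
        public static void Run()
        {
            BaseSender sender = new BufferedSender();

            // The override runs, but the default comes from the compile-time
            // type of the reference (BaseSender), so isDurable is true here:
            sender.Send("ping");

            // Through a BufferedSender reference, the derived default (false) is used:
            ((BufferedSender)sender).Send("ping");
        }
    }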

    Read the article

< Previous Page | 860 861 862 863 864 865 866 867 868 869 870 871  | Next Page >