Search Results

Search found 11400 results on 456 pages for 'automated testing'.


  • Guidance on building an au pair-to-family networking site.

    - by Philip Kidd
    I'm building a website for an au pair agency that will connect au pairs to families around Europe. I know nothing about website building, HTML, etc., so I'm using a WYSIWYG editor (Weebly). How I would like the site to function:

    - Families upload their information into profiles, and au pairs do the same.
    - Families can view only a limited part of an au pair's profile until they pay a deposit.
    - After the deposit is paid, all of an au pair's profile information becomes open to families.
    - Families can order au pairs and confirm their order with another payment; payment must be made before the 'order' is confirmed.
    - By 'order' I mean that full communications open up between the family and the au pair they have 'ordered', and travel information is sent to another agency.
    - The site needs to be linked to a payment provider (e.g. PayPal) and to another agency, which will look after the flight bookings etc.

    A website already exists for this business, but it just contains information on the business and application forms. If the site becomes fully automated it will relieve a lot of strain on administration in the office (dealing with applications, travel information, etc.).

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and keeping it maintainable over time is an art, not a science. I like being pedantic and having a place for everything, no matter how small. For setting up Areas to run multiple projects under one solution, see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.

    Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.

    Whenever I set up a new Team Project I implement the basic version control structure. I put "readme.txt" files in the folder structure explaining the different levels, and a solution file called "[Client].[Product].sln" located at "$/[Client]/[Product]/DEV/Main" within version control. Developers should add any projects they need to create to that solution, named in the format "[Client].[Product].[ProductArea].[Assembly]", and they will be picked up and built automatically when you set up Automated Builds using Team Foundation Build. All test projects need to be written with MSTest to get proper IDE and Team Foundation Build integration out of the box, and should be named for the assembly they test, with a naming convention of "[Client].[Product].[ProductArea].[Assembly].Tests".

    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations, so that even developers new to the project can see how to do it.

    Figure: The Team Project level – at this level there should be a folder for each of the products you are building, if you are using Areas correctly in TFS 2010. Try very hard to avoid spaces, as these names always end up in a URL eventually; e.g. "Code Auditor" should be "CodeAuditor".

    Figure: Product level – at this level there should be only three folders (DEV, RELEASE and SAFE), all in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

    Figure: The DEV folder is where all of the development branches reside. It contains the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that no code in any branch under this folder has been released or made ready for release. Feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integrate) back down into Main and being decommissioned.

    Figure: In the feature-branching scenario, only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

    Figure: When we are ready to do a release of our software we create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

    Figure: All bugs found in a release are fixed on the release branch, and a new deployment is created. After the deployment is created, the bug fixes are merged (Reverse Integrate) into the Main branch. We do this to separate our development from our production-ready code.

    Figure: SAFE or RTM is a read-only record of what you actually released. Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

    Figure: This allows us to reference any particular version of our application that was ever shipped.

    In addition, I am an advocate of having a single solution, with all the project folders directly under the "Trunk"/"Main" folder, using the full name for the project folders.

    Figure: The ideal solution.

    If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions "[Client].[Product][VSVersion].sln" and have them reside in the same folder as the other solution. This makes automated builds easier and improves the discoverability of your code and its dependencies. Send me your feedback!

    Technorati Tags: VS ALM, VSTS Developing, VS 2010, VS 2008, TFS 2010, TFS 2008, TFBS
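    To make the layout concrete, here is a minimal sketch of how that structure might look for a hypothetical client "Contoso" and product "CodeAuditor"; all names, branch names and project names below are illustrative, not taken from the article:

        $/Contoso/CodeAuditor/
            DEV/
                readme.txt                              (explains this level)
                Main/
                    Contoso.CodeAuditor.sln
                    Contoso.CodeAuditor.Core/
                    Contoso.CodeAuditor.Core.Tests/     (MSTest project)
                FeatureX/                               (optional feature branch)
            RELEASE/
                Release1.0/
            SAFE/
                Release1.0/                             (read-only copy of what shipped)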

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Back in one of my three original "Little Wonders" Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET. Later, someone posted a comment saying that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates. This week, I'll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes.

    Quick Delegate Recap

    Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method. They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate. Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or lambda expression. They can also be null (referring to no method), so care should be taken to make sure a delegate is not null before you invoke it.

    Delegates are defined using the keyword delegate, where the delegate's type name is placed where you would typically place the method name:

        // This delegate matches any method that takes string, returns nothing
        public delegate void Log(string message);

    This defines a delegate type named Log that can be used to store references to any method(s) that satisfy its signature (whether instance, static, lambda expression, etc.). Delegate instances can then be assigned zero (null) or more methods using the operator =, which replaces the existing delegate chain, or the operator +=, which adds a method to the end of a delegate chain:

        // creates a delegate instance named currentLogger defaulted to
        // Console.Out.WriteLine (an instance method of the Out TextWriter)
        Log currentLogger = Console.Out.WriteLine;

        // invokes the delegate, which writes to the console out
        currentLogger("Hi Standard Out!");

        // append a delegate to Console.Error.WriteLine to go to std error
        currentLogger += Console.Error.WriteLine;

        // invokes the delegate chain and writes message to std out and std err
        currentLogger("Hi Standard Out and Error!");

    While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly; for this purpose the generic delegates were introduced in various stages in .NET. These support various method types with particular signatures.

    Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contain ref or out parameters. If you want a delegate to represent methods that take ref or out parameters, you will need to create a custom delegate.

    We've got the Func… delegates

    Just like its cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic. Using them keeps us from having to define a new delegate type each time we need to make a class or algorithm generic. Remember that the point of the Action delegate family was to be able to perform an "action" on an item, with no return results.
    Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void. The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result. Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing while Funcs return a result. The Func family of delegates have signatures as follows:

    - Func<TResult> – matches a method that takes no arguments and returns a value of type TResult.
    - Func<T, TResult> – matches a method that takes an argument of type T and returns a value of type TResult.
    - Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2 and returns a value of type TResult.
    - Func<T1, T2, …, TResult> – and so on, up to 16 arguments, returning a value of type TResult.

    These are handy because they let you quickly specify that a method or class you design will perform a function to produce a result, as long as the method you are given meets the signature.

    For example, let's say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc.). To do this, we would ask the user of our class to pass in a method that takes the current total and the next value, and produces a new total. A class like this could look like:

        public sealed class Aggregator<TValue, TResult>
        {
            // holds method that takes previous result, combines with next value, creates new result
            private Func<TResult, TValue, TResult> _aggregationMethod;

            // gets the current result of aggregation
            public TResult Result { get; private set; }

            // construct the aggregator given the method to use to aggregate values
            public Aggregator(Func<TResult, TValue, TResult> aggregationMethod)
            {
                if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod");

                _aggregationMethod = aggregationMethod;
            }

            // method to add next value
            public void Aggregate(TValue nextValue)
            {
                // performs the aggregation method on the current result and next value
                Result = _aggregationMethod(Result, nextValue);
            }
        }

    Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of the max response time for a service).
    We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things):

        // creates an aggregator that adds the next value to the total to sum the values
        var sumAggregator = new Aggregator<int, int>((total, next) => total + next);

        // creates an aggregator (using a static method) that returns the max of the previous result and next
        var maxAggregator = new Aggregator<int, int>(Math.Max);

    So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method:

        // total will be 13 and max 13
        int responseTime = 13;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 20 and max still 13
        responseTime = 7;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 40 and max now 20
        responseTime = 20;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

    The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of a method or user of a class to specify a function to be performed in order to generate a result.

    What is the result of a Func delegate chain?

    If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them. So how does this affect delegates such as Func that return a value, when applied to something like the code below?

        Func<int, int, int> combo = null;

        // What if we wanted to aggregate the sum and max together?
        combo += (total, next) => total + next;
        combo += Math.Max;

        // what is the result?
        var comboAggregator = new Aggregator<int, int>(combo);

    Well, in .NET if you chain multiple methods in a delegate, they all get invoked, but the result of the delegate is the result of the last method invoked in the chain. Thus, this aggregator would always produce the Math.Max() result. The other chained method (the sum) gets executed first, but its result is thrown away:

        // result is 13
        int responseTime = 13;
        comboAggregator.Aggregate(responseTime);

        // result is still 13
        responseTime = 7;
        comboAggregator.Aggregate(responseTime);

        // result is now 20
        responseTime = 20;
        comboAggregator.Aggregate(responseTime);

    So remember: you can chain multiple Funcs (or other delegates that return values) together, but if you do so you will only get the last executed result.

    Func delegates and co-variance/contra-variance in .NET 4.0

    Just like the Action delegates, as of .NET 4.0 the Func delegate family is contra-variant on its arguments. In addition, it is co-variant on its return type. To support this, in .NET 4.0 the signatures of the Func delegates changed to:

    - Func<out TResult> – matches a method that takes no arguments and returns a value of type TResult (or a more derived type).
    - Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type) and returns a value of type TResult (or a more derived type).
    - Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types) and returns a value of type TResult (or a more derived type).
    - Func<in T1, in T2, …, out TResult> – and so on, up to 16 arguments, returning a value of type TResult (or a more derived type).

    Notice the addition of the in and out keywords before each of the generic type placeholders. As we saw last week, the in keyword specifies that a generic type can be contra-variant: it can match the given type or a type that is less derived. The out keyword specifies that a generic type can be co-variant: it can match the given type or a type that is more derived.

    On the contra-variance side, if you say you need a function that will accept a string, you can just as easily give it a function that accepts an object. In other words, if you say "give me a function that will process dogs", I could pass you a method that will process any animal, because all dogs are animals. On the co-variance side, if you say you need a function that returns an object, you can just as easily pass it a function that returns a string, because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived. Once again, in other words, if you say "give me a method that creates an animal", I can pass you a method that will create a dog, because all dogs are animals.

    It really all makes sense: you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result. In other words, pay attention to the direction the item travels (parameters go in, results come out). Keeping that in mind, you can always pass more specific things in and return more specific things out.

    For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string>, because the return type of object can obviously accept a return value of string as well:

        // since Func<TResult> is co-variant, this will accept Func<string>, etc...
        public static string Sequence(int count, Func<object> generator)
        {
            var builder = new StringBuilder();

            for (int i = 0; i < count; i++)
            {
                object value = generator();
                builder.Append(value);
            }

            return builder.ToString();
        }

    Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well:

        // delegate that's typed to return string.
        Func<string> stringGenerator = () => DateTime.Now.ToString();

        // This will work in .NET 4.0, but not in previous versions
        Sequence(100, stringGenerator);

    Previous versions of .NET implemented some forms of co-variance and contra-variance, but .NET 4.0 goes one step further and allows you to pass or assign a Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived than (or the same as) Y, and BResult is more derived than (or the same as) ZResult.

    Sidebar: The Func and the Predicate

    A method that takes one argument and returns a bool is generally thought of as a predicate. Predicates are used to examine an item and determine whether that item satisfies a particular condition. Predicates are typically unary, but you may also have binary and other predicates as well.
    Predicates are often used to filter results, such as in the LINQ Where() extension method:

        var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 };

        // call Where() using a predicate which determines if the number is even
        var evens = numbers.Where(num => num % 2 == 0);

    As of .NET 3.5, predicates are typically represented as Func<T, bool>, where T is the type of the item to examine. Prior to .NET 3.5, there was a Predicate<T> type that tended to be used (which we'll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion in overload sets that accept unary predicates, binary predicates, etc.:

        // this seems more confusing as an overload set, because of Predicate vs Func
        public static void SomeMethod(Predicate<int> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

        // this seems more consistent as an overload set, since it just uses Func
        public static void SomeMethod(Func<int, bool> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

    Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types! Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa:

        // the same method, lambda expression, etc. can be assigned to both
        Predicate<int> isEven = i => (i % 2) == 0;
        Func<int, bool> alsoIsEven = i => (i % 2) == 0;

        // but the delegate instances cannot be directly assigned; strongly typed!
        // ERROR: cannot convert type...
        // isEven = alsoIsEven;

        // however, you can assign by wrapping in a new instance:
        isEven = new Predicate<int>(alsoIsEven);
        alsoIsEven = new Func<int, bool>(isEven);

    So the general advice from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above.

    Sidebar: Func as a Generator for Unit Testing

    One area of difficulty in unit testing can be code that depends on the time of day. We still want to unit test such code to make sure the logic is accurate, but we don't want the results of our unit tests to depend on the time they are run. One way (of many) around this is to create an internal generator that produces the "current" time of day. This would default to returning the result of DateTime.Now (or some other method), but we could inject specific times for our unit testing. Generators are typically methods that return (generate) a value for use in a class/method.

    For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if its age is more than 30 seconds.
    Such a class could look like:

        // responsible for maintaining an item of type T in the cache
        public sealed class CacheItem<T>
        {
            // helper method that returns the current time
            private static Func<DateTime> _timeGenerator = () => DateTime.Now;

            // allows internal access to the time generator
            internal static Func<DateTime> TimeGenerator
            {
                get { return _timeGenerator; }
                set { _timeGenerator = value; }
            }

            // time the item was cached
            public DateTime CachedTime { get; private set; }

            // the item cached
            public T Value { get; private set; }

            // item is expired if older than 30 seconds
            public bool IsExpired
            {
                get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); }
            }

            // creates the new cached item, setting cached time to "current" time
            public CacheItem(T value)
            {
                Value = value;
                CachedTime = _timeGenerator();
            }
        }

    Then we can use this construct to unit test our CacheItem<T> without any time dependencies:

        var baseTime = DateTime.Now;

        // start with current time stored above (so it doesn't drift)
        CacheItem<int>.TimeGenerator = () => baseTime;

        var target = new CacheItem<int>(13);

        // now add 15 seconds; should still be non-expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15);

        Assert.IsFalse(target.IsExpired);

        // now add 31 seconds; should now be expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31);

        Assert.IsTrue(target.IsExpired);

    Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc. Func delegates can be a handy tool for this type of value generation to support more testable code.

    Summary

    Generic delegates give us a lot of power to make truly generic algorithms and classes. The Func family of delegates is a great way to specify functions that calculate a result based on 0 to 16 arguments. Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!

    Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • Example sites which use UCC certificates

    - by Brian
    Can anyone point me to a few sites that make use of UCC (SAN) certificates? I tried to search for this but found a lot of information about UCC certificates without any examples. As a sanity check before buying/configuring a UCC certificate, I want to do some basic testing to determine exactly how the certificate will look in different browsers. Yes, I realize I could just use makecert instead; I would rather look at them in the wild.
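    One way to inspect a live site's certificate, rather than relying on browser chrome, is to pull it down programmatically and dump its Subject Alternative Name extension. Here is a minimal C# sketch of that idea; the host name is a placeholder, and the validation callback blindly accepts the certificate because we only want to read it, not trust it:

        using System;
        using System.Net.Security;
        using System.Net.Sockets;
        using System.Security.Cryptography.X509Certificates;

        class DumpSan
        {
            static void Main()
            {
                string host = "www.example.com"; // placeholder: any site with a UCC/SAN cert

                using (var client = new TcpClient(host, 443))
                using (var ssl = new SslStream(client.GetStream(), false, (s, c, ch, e) => true))
                {
                    ssl.AuthenticateAsClient(host);
                    var cert = new X509Certificate2(ssl.RemoteCertificate);

                    foreach (X509Extension ext in cert.Extensions)
                    {
                        // 2.5.29.17 is the OID of the Subject Alternative Name extension
                        if (ext.Oid != null && ext.Oid.Value == "2.5.29.17")
                            Console.WriteLine(ext.Format(true));
                    }
                }
            }
        }

    Run against a handful of candidate sites, this prints every DNS name the certificate covers, which is exactly what differs between a plain certificate and a UCC/SAN one.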

    Read the article

  • Useful utilities - LINQPad

    - by TATWORTH
    Recently I came across LINQPad (http://www.linqpad.net/), a free utility by Joseph Albahari. This is an excellent tool for developing and testing LINQ queries before you incorporate them into your C# programs. If you get stuck, as I did at one point recently, there is the MSDN LINQ forum at http://forums.microsoft.com/MSDN/ShowForum.aspx?siteid=1&ForumID=123 where you can ask for help.
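    For a flavour of the kind of query you might prototype there, here is a small illustrative snippet (the data is made up) as you might paste it into LINQPad as "C# Statements"; LINQPad imports System.Linq by default, and Dump() is its built-in extension method for rendering results:

        // squares of the even numbers from 1 to 20
        var evenSquares = Enumerable.Range(1, 20)
                                    .Where(n => n % 2 == 0)
                                    .Select(n => n * n);

        evenSquares.Dump(); // renders the sequence in LINQPad's results pane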

    Read the article

  • How to do more with Oracle VM Templates

    - by uwes
    On Oracle's Virtualization Blog you can find an interesting post about working with Oracle VM Templates, titled "Opening The Oracle VM Templates Blackbox". Monica Kumar gives a brief explanation of what the Oracle VM Guest Additions are and how they can help you work smarter with templates. At the end you will find an invitation to join an upcoming webcast (October 24th), where you can learn more from experts like Robbie de Meyer or Saar Maoz. Register for the live Webcast. View the whitepaper on Oracle VM Templates Automated Virtual Machine Provisioning.

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:

    - SQLU part 1 - What and why of database testing
    - SQLU part 2 - What and why of database refactoring
    - SQLU part 3 - Tools of the trade

    This is the second part of the series, and in it we'll take a look at what database refactoring is and why we do it.

    Why refactor a database

    To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't written any, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. That's one more point in favor of having a database abstraction layer. And no, ORMs don't fall into that category.

    Also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them, and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus.

    Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

    What to refactor in a database

    Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects

    Adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer. But each of them carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint exposes duplicated data that shouldn't exist. A foreign key breaks a truncate-table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black-box tests, we can make sure something like that does not happen (hopefully at all).

    Changing physical structures

    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders-of-magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code

    We all have some bad code in our systems. We usually refer to such code as a code smell, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it; it's informative. Refactoring this includes replacing the * with column names and, most likely, changes to the application using the database.

    Breaking apart huge stored procedures

    Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks out the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior, and then starting to refactor it. First we split the logic inside into smaller parts, like new stored procedures and UDFs, which then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data-manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code; the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.

    Read the article

  • Responsive Design: Media query fix for IE10 on Windows Phone 8

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2013/07/01/responsive-design-media-query-fix-for-ie10-on--windows.aspx

    The version of IE10 on Windows Phone 8 apparently has a bug which results in media queries not seeing the correct device width. This post from Devhammer explains all: http://devhammer.net/responsive-design-fix-for-windows-phone-8-device-adaptation

    I'd not noticed this on the WP8 emulator, which proves yet again that testing on real devices is essential.

    Read the article

  • Do More, Spend Less, Speed Time to Market – All with Oracle Database Appliance.

    - by jgelhaus
    Do More, Spend Less, Speed Time to Market – All with Oracle Database Appliance. Join Oracle for a first-hand experience that will highlight how your business can lower TCO for hardware and software, do more with your existing personnel and resources, and get your products to market faster with Oracle Database Appliance. Learn how you can take advantage of the world's most popular database – Oracle Database 11g – in a single solution that's affordable, provides automated installation, is easy to manage, and is supported end-to-end by Oracle. Oracle Database Appliance is the complete package: software, server, storage, and networking, all designed by Oracle to simplify your technology and let you get down to business.

    Webcast Schedule (all sessions at 1:00pm Eastern; teleconference 1-866-753-5684, passcode: oda):

    - Wednesday, April 4 – Webcast Link, Conference Code: 61908866
    - Wednesday, April 11 – Webcast Link, Conference Code: 61909590
    - Wednesday, April 18 – Webcast Link, Conference Code: 61910385

    Read the article

  • Monitoring Your Servers

    - by Grant Fritchey
    If you are the DBA in a large-scale enterprise, you're probably already monitoring your servers for up-time and performance. But if you work for a medium-sized business, a small shop, or even a one-man operation, chances are pretty good that you're not doing that sort of monitoring. You know that you're supposed to be doing it, but other, more important at-the-moment things keep getting in the way. After all, which is more important, some monitoring or backup testing? Backup testing, of course. Monitoring is frequently one of those things that you do when you can get around to it. Well, as you can see at the right, I have your round tuit ready to go.

    What if I told you that you could get monitoring on your servers for up-time, job completion, performance, all the standard stuff? And what if I told you that you wouldn't need to install and configure another server in your environment to get it done? And what if I told you that you'd be able to set up and customize your alerts so you could know if your server was offline or a drive was full? Almost nothing for you to do, and you'll have a full-blown monitoring process. Sounds too good to be true, doesn't it?

    Well, it's coming. We're creating an online, remote monitoring system here at Red Gate. You'll be able to use our SQL Monitor tool (which you can see here, monitoring SQL Server Central in real time) to keep track of your systems, but without having to set up a server and a database for storing the information collected. Instead, we're taking advantage of services available through the internet to enable collection and storage of this information remotely, off your systems. All you have to do is install a piece of software that will communicate between our service and your servers, and you'll be off and running. It's that easy. Before you get too excited, let me break the news that this is the near future I'm talking about. We're setting up the program, and there's a sign-up you can use to get in on the initial tests.

    Read the article

  • Security of logging people in automatically from another app?

    - by Simon
    I have 2 apps. They both have accounts, and each account has users. These apps are going to share the same users and accounts, and they will always be in sync. I want to be able to log users in automatically from one app to the other.

    My solution is to generate a login_key, for example 2sa7439e-a570-ac21-a2ao-z1qia9ca6g25, once a day, and provide an automated login link to the other app. For example, if the user clicks on:

    https://account_name.securityhole.io/login/2sa7439e-a570-ac21-a2ao-z1qia9ca6g25/user/123

    they are logged in automatically and a session is created. So here we have 3 things that an intruder has to get right in order to gain access: the account name, the login key, and the user id.

    Bad idea? Or should I go down the path of making one app an OAuth provider? Or is there a better way?
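    One common hardening step, if you stay with signed links rather than full OAuth, is to make the key per-user and short-lived instead of a shared daily value. Below is a minimal C# sketch of that idea, assuming both apps share a secret; the class name, field layout and the 5-minute expiry are illustrative, not a prescribed design:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class LoginToken
        {
            // builds "userId:expiresUnixSeconds:signature" for use in a login URL
            public static string Create(string userId, byte[] sharedSecret)
            {
                long expires = (long)(DateTime.UtcNow.AddMinutes(5)
                    - new DateTime(1970, 1, 1)).TotalSeconds;
                string payload = userId + ":" + expires;

                using (var hmac = new HMACSHA256(sharedSecret))
                {
                    byte[] sig = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    return payload + ":" + Convert.ToBase64String(sig);
                }
            }
        }

    The receiving app recomputes the HMAC over "userId:expires", compares it to the signature, and rejects the request if they differ or the expiry has passed. That way a leaked link is only useful for minutes, and only for one user, rather than for a whole day across every account.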

    Read the article

  • SQL Azure Data Sync

    - by kaleidoscope
    The Microsoft Sync Framework Power Pack for SQL Azure contains a series of components that improve the experience of synchronizing with SQL Azure. This includes runtime components that optimize performance and simplify the process of synchronizing with the cloud.

    SQL Azure Data Sync allows developers and DBAs to:

    - Link existing on-premises data stores to SQL Azure.
    - Create new applications in Windows Azure without abandoning existing on-premises applications.
    - Extend on-premises data to remote offices, retail stores and mobile workers via the cloud.
    - Take Windows Azure and SQL Azure based web applications offline to provide an "Outlook-like" cached-mode experience.

    The Microsoft Sync Framework Power Pack for SQL Azure is comprised of the following:

    - SqlAzureSyncProvider
    - SQL Azure Offline Visual Studio Plug-In
    - SQL Azure Data Sync Tool for SQL Server
    - New SQL Azure Events
    - Automated Provisioning

    Geeta
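    As a rough illustration of what a synchronization pass looks like in code, here is a hedged C# sketch using the Sync Framework's core SyncOrchestrator with two SqlSyncProvider instances; the connection strings and the "CustomersScope" scope name are placeholders, and the scope is assumed to have been provisioned on both databases beforehand:

        using System;
        using System.Data.SqlClient;
        using Microsoft.Synchronization;
        using Microsoft.Synchronization.Data.SqlServer;

        class SyncDemo
        {
            static void Main()
            {
                using (var local = new SqlConnection("Data Source=.;Initial Catalog=LocalDb;Integrated Security=True"))
                using (var azure = new SqlConnection("Server=tcp:myserver.database.windows.net;Database=CloudDb;User ID=user;Password=secret;"))
                {
                    var orchestrator = new SyncOrchestrator
                    {
                        // "CustomersScope" is a placeholder sync scope, provisioned beforehand
                        LocalProvider = new SqlSyncProvider("CustomersScope", local),
                        RemoteProvider = new SqlSyncProvider("CustomersScope", azure),
                        Direction = SyncDirectionOrder.UploadAndDownload
                    };

                    SyncOperationStatistics stats = orchestrator.Synchronize();
                    Console.WriteLine("Uploaded: {0}, Downloaded: {1}",
                        stats.UploadChangesTotal, stats.DownloadChangesTotal);
                }
            }
        }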

    Read the article

  • Oracle BPM: Adding an attachment during the Human Task Initialization

    - by kyap
    Recently I had a requirement from a customer to instantiate a Human Task that can accept a payload containing a binary attribute (base64) representing an actual document. According to the same requirement, this attribute should be shown as a hyperlink in the Worklist UI to the assignee(s), from which the assignees can download the document to their local machine for review. Multiple options were explored, but most required heavy customization. In order to leverage as much as possible of the out-of-the-box Oracle BPM functionality, I decided to add this document as a read-only attachment. We can easily achieve this operation within the Worklist Application, but it is a bit more challenging when we want to attach the document during the Human Task initialization. After some investigation (on BPM 11g PS4FP and PS5), here's the way to go:

    1. Create an asynchronous BPM process, and use this xsd to create 2 Business Objects, FullPayload and PartialPayload.

    2. Create 2 process variables, 'vFullPayload' and 'vPartialPayload', using the Business Objects created above.

    3. Implement the Start Event with the initial Data Association, with an input argument using the 'FullPayload' Business Object type.

    4. Drag a User Task into the process. Implement the User Task as usual, using 'vPartialPayload' as the input type, and assign the task to your favorite tester (mine is jcooper).

    5. Here's the main course – start the Data Association and map the payload into 'execData' as follows (FROM → TO):

    - vFullPayload.attachment.mimetype → execData.attachment[1].mimeType
    - vFullPayload.attachment.filename → execData.attachment[1].name
    - bpmn:getDataObject('vFullPayload')/ns:attachment/ns:content → execData.attachment[1].content
    - 'BPM' → execData.attachment[1].attachmentScope
    - false() → execData.attachment[1].doesBelongToParent
    - 'weblogic' → execData.attachment[1].updateBy
    - xp20:current-dateTime() → execData.attachment[1].updateDate

    (Note: check the <Humantask>WorkflowTask.xsd file in your project's xsd folder to discover the different options for attachmentScope and storageType.)

    6. Your process is complete. Just build a standard ADF UI and deploy the process/UI onto your BPM server for testing. Here's an example, with a base64-encoded pdf file: application-pdf.txt

    7. Finally, go to the BPM Worklist application to check the result!

    Please note that Oracle BPM, by default, limits the attachment document size to 2Mb. If you are planning to have bigger attachments in your process, it is recommended to store your documents in a content management server (such as Oracle UCM) and pass a reference instead. It is possible to configure Oracle BPM to store attachments directly in Oracle UCM too, and I believe we can use the storageType and ucmMetadataItem attributes for this purpose... I will confirm once I have access to an Oracle UCM instance for testing :)

    Read the article

  • Philly GiveCamp 2010

    - by wulfers
    Spent the weekend helping out several non-profits doing what we like to do best: designing, developing and making people very happy with their new websites, systems, applications and features. From what I saw at this GiveCamp, about 75 percent of the non-profits needed updated or new websites supporting CMS features and the ability for staff members to update the content on their websites. Some cool apps were designed and developed:

    - A centralized system for distribution of daily schedules and task assistance to autistic clients was a show-stopper, with an awesome central management interface, iPhone support and, in the future, Windows Phone support for tactile, auditory and visual cues.
    - SharePoint was upgraded and forms were automated for a volunteer fire company that desperately needed some automation to help the firemen do their primary job.
    - Many cool sites for non-profits that had either an outdated or non-existent web presence.

    Read the article

  • Ops Center 12.1.4 Released

    - by Owen Allen
    Ops Center version 12.1.4.0.0, an update for 12c, has just been released. There are a few new features. The biggest one is support for multiple Automated Installer install services, which means that you can provision any version of Oracle Solaris 11, rather than just one. We've also added support for multi-file VM templates, and enhanced the network configuration support for vServers. You can take a look at the What's New document for more information about the new features. If you're already using Ops Center, you can download the 12.1.4 upgrade through the UI, or get it from the Oracle Tech Network or from e-delivery. The Upgrade chapter in the Admin Guide explains how to perform the upgrade.

    Read the article

  • Has RFC2324 been implemented?

    - by anthony-arnold
    I know RFC2324 was an April Fools joke. However, it seems pretty well thought out, and after reading it I figured it wouldn't be out of the question to design an automated coffee machine that used this extension to HTTP. We programmers love to reference this RFC when arguing web standards ("418 I'm a Teapot lolz!") but the joke's kind of on us. Ubiquitous computing research assumes that network-connected coffee machines are probably going to be quite common in the future, along with Internet-connected fruit and just about everything else. Has anyone actually implemented a coffee machine that is controlled via HTCPCP? Not necessarily commercial, but hacked together in a garage, maybe? I'm not talking about just a web server that responds to HTCPCP requests; I mean a real coffee machine that actually makes coffee. I haven't seen an example of that.

    Read the article

  • ArchBeat Link-o-Rama for 11/17/2011

    - by Bob Rhubart
    - Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c | Richard Rotter: Richard Rotter demonstrates "how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing."

    - Article: Social + Lean = Agile | Dave Duggal: In today's increasingly dynamic business environment, organizations must continuously adapt to survive. Change management has become a major bottleneck. Organizations need a practical mechanism for managing controlled variance and change in-flight to break the logjam. This paper provides a foundation for applying lean and agile principles to achieve enterprise agility through social collaboration.

    - Stress Testing Java EE 6 Applications - Free Article in Free Java Magazine | Adam Bien: "It is strange," says Adam Bien. "Everyone is obsessed about green bars and code coverage, but testing of multi-threaded behavior is widely ignored - until the applications run into massive problems."

    - Using Access Manager to Secure Applications Deployed on WebLogic | Rene van Wijk: Another great how-to post from Oracle ACE Rene van Wijk, this time involving JBoss RichFaces, Facelets, Oracle Coherence, and Oracle WebLogic Server.

    - DOAG 2011 vs. Devoxx - Value and Attraction | Markus Eisele: Oracle ACE Director Markus Eisele compares and contrasts these popular conferences with the aim of helping others decide which to attend.

    - SOA All the Time; Architects in AZ; Clearing Info Integration Hurdles: This week on the Architect Home Page on OTN.

    - Webcast: Oracle Business Intelligence Mobile: Wednesday, December 7, 2011, 10 a.m. PT / 1 p.m. ET. Featuring Manan Goel (Director BI Product Marketing, Oracle) and Shailesh Shedge (Director BI and Analytics Practice, Ascentt).

    - Webcast: Maximum Availability on Private Clouds: A discussion of Oracle's Maximum Availability Architecture, Oracle Database 11g, Oracle Exadata Database Machine, and Oracle Database Appliance, featuring Margaret Hamburger (Director, Product Marketing, Oracle) and Joe Meeks (Director, Product Management, Oracle). November 30, 2011, 10:00am PT / 1:00pm ET.

    - Oracle Technology Network Architect Day - Phoenix, AZ: Wednesday, December 14, 2011, 8:30am-5:00pm. The Ritz-Carlton, Phoenix, 2401 East Camelback Road, Phoenix, AZ 85016. Registration is free, but seating is limited.

    Read the article

  • SQL SERVER – A Brief Note on SET TEXTSIZE

    - by pinaldave
    Here is a small conversation I received. Though an old topic, it is indeed thought-provoking for the moment.

    Question: Is there any difference between the LEFT function and SET TEXTSIZE?

    I really like this small but interesting question. The question does not specify whether the difference is about usage or performance. Anyway, we will quickly take a look at how TEXTSIZE works. You can run the following script to see how LEFT and SET TEXTSIZE work:

        USE TempDB
        GO
        -- Create TestTable
        CREATE TABLE MyTable (ID INT, MyText VARCHAR(MAX))
        GO
        INSERT MyTable (ID, MyText)
        VALUES(1, REPLICATE('1234567890', 100))
        GO
        -- Select Data
        SELECT ID, MyText FROM MyTable
        GO
        -- Using Left
        SELECT ID, LEFT(MyText, 10) MyText FROM MyTable
        GO
        -- Set TextSize
        SET TEXTSIZE 10;
        SELECT ID, MyText FROM MyTable;
        SET TEXTSIZE 2147483647
        GO
        -- Clean up
        DROP TABLE MyTable
        GO

    Now let us see the result we receive from both of the examples. If you are going to ask what you should do – I really do not know. I can tell you where I would use each of them. LEFT seems to be easy to use, but if you like to do the extra work involved with SET TEXTSIZE, go for it. Here is how I would use SET TEXTSIZE: if I am selecting data in SSMS, for testing or other non-production work, from a large table which has lots of columns with varchar data, I will consider using this statement to reduce the amount of data retrieved in the result set. In simple words, I will use it for testing purposes. On a production server, there should be a specific reason to use it.

    Here is my candid opinion – I do not think they are directly comparable, even though both of them give the exact same result. LEFT is applicable only to the column of the single SELECT statement where it is used, but SET TEXTSIZE applies to all the columns in the SELECT and follow-up SELECT statements until SET TEXTSIZE is modified again in the session. Uncomparable!

    I hope this sample example gives you an idea of how to use SET TEXTSIZE in your daily work. I would like to know your opinion about how and when you use this feature. Please leave a comment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Sweet and Sour Source Control

    A recent survey on SQL Server Central showed that most database developers don't use source control. At first glance, it's a surprising thought. Unfortunately, the survey didn't ask about the scale of the database development. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable.

    Read the article

  • TFS: Work Items values from External Databases

    - by javarg
    A common question in TFS forums is how to populate list items from external sources in Work Items. Well, there is no specific functionality to integrate Work Items with external databases or systems when designing them. Instead, you will need to associate your Work Item fields with Global Lists and then have some automated process update this global list regularly.

    Download this ImportGlobalList.zip file. I've put together a simple class (TfsGlobalList) that you can use to update global list items from a .NET application. You could, for example, create a simple console app and schedule it using Windows Scheduler. This app would query a database and then update a TFS Global List using the provided code.

    Note: the provided code must be run under an account with permission to modify Global Lists in TFS.

    Note: remember to refresh Team Explorer in order to see updates in Work Item field values.

    Enjoy!
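    For readers who cannot grab the zip: the core of such a helper is small. Here is a hedged sketch (not the author's TfsGlobalList class) of importing a global list with the TFS 2010 object model; the collection URL, list name and list values are placeholders:

        using System;
        using System.Xml;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class UpdateGlobalList
        {
            static void Main()
            {
                var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfs:8080/tfs/DefaultCollection")); // placeholder URL
                var store = tfs.GetService<WorkItemStore>();

                // global list XML; in a real scheduled job, the LISTITEM values
                // would come from the external database query
                var doc = new XmlDocument();
                doc.LoadXml(@"<gl:GLOBALLISTS xmlns:gl='http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists'>
                  <GLOBALLIST name='Customers'>
                    <LISTITEM value='Contoso' />
                    <LISTITEM value='Fabrikam' />
                  </GLOBALLIST>
                </gl:GLOBALLISTS>");

                // replaces the named global list with the supplied items
                store.ImportGlobalLists(doc.DocumentElement);
            }
        }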

    Read the article

  • What are the unique aspects of the software lifecycle of an attack/tool on a software vulnerability?

    - by David Kaczynski
    At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking/security. I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating complexity of tasks, and continuous integration for version control and automated builds/testing.

    I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, agile, etc., but I am wondering if there is such a thing as a software development life-cycle for hacking / breaching security. Surely, hackers are writing computer code, but what is the life-cycle of that code? I don't think that they would be too concerned with maintenance, as once the breach has been found and patched, the code that exploited that breach is useless. I imagine the life-cycle would be something like:

    - Find gap in security
    - Exploit gap in security
    - Procure payload
    - Utilize payload

    What kind of differences (if any) are there in the development life-cycle of software when the purpose of the product is to breach security?

    Read the article

  • DNS name server error

    - by Danny
    I am getting a DNS error on Google Webmaster Tools, even after testing with http://dnscheck.pingdom.com/?domain=ansoftsys.com&timestamp=1372108107&view=1

    Name server details: here is a screenshot of my DNS management page.

    How do I solve this issue? My DNS error image below was generated from this link: http://dnscheck.pingdom.com/?domain=ansoftsys.com&timestamp=1372108107&view=1

    Read the article

  • Checking for cross-site scripting vulnerabilities in Perl web applications

    - by David Scholefield
    I'm putting together some notes for a dev team on how to write secure Perl code, especially taking into account the current OWASP top 10 web application vulnerabilities. For cross-site scripting I've included information on ensuring that all output to the browser is checked and escaped where necessary, but I'm looking for more automated mechanisms that would mean a developer doesn't have to think about every output statement and, potentially, miss one. Perl's taint mode sounds like it should help because it distrusts all user input, but it doesn't complain about tainted data being output to the browser. Apart from checking all output statements individually (probably by calling a generic sanitizing function), does anyone have any ideas on how Perl can help with this with existing libraries or techniques?

    Read the article

  • On-Demand Webcast: Managing Oracle Exadata with Oracle Enterprise Manager 11g

    - by Scott McNeil
    Watch this on-demand webcast and discover how Oracle Enterprise Manager 11g's unique management capabilities allow you to efficiently manage all stages of Oracle Exadata's lifecycle, from testing applications on Exadata to deployment. You'll learn how to:

    - Maximize and predict database performance
    - Drive down IT operational costs through automation
    - Ensure service quality with proactive management

    Register today and unlock the potential of Oracle Exadata for your enterprise. Register Now!

    Read the article
