Search Results

Search found 26104 results on 1045 pages for 'command pattern'.


  • What OO design should I use (is there a design pattern)?

    - by Blundell
    I have two objects that represent a 'Bar/Club' (a place where you drink/socialise). In one scenario I need the bar name, address, distance and slogan. In another scenario I need the bar name, address, website URL and logo. So I've got two objects representing the same thing but with different fields. I like to use immutable objects, so all the fields are set from the constructor. One option is to have two constructors and null the other fields, i.e.: class Bar { private final String name; private final Distance distance; private final Url url; public Bar(String name, Distance distance){ this.name = name; this.distance = distance; this.url = null; } public Bar(String name, Url url){ this.name = name; this.distance = null; this.url = url; } // getters } I don't like this, as you would have to null-check when you use the getters. In my real example the first scenario has 3 fields and the second has about 10, so it would be a real pain to have two constructors and all the fields I would have to declare null; and then, when the objects are in use, you wouldn't know which Bar you were using and so which fields would be null and which wouldn't. What other options do I have? Two classes called BarPreview and Bar? Some type of inheritance/interface? Something else that is awesome?
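    A minimal sketch of the "two small classes" option mentioned in the question follows; the class names (BarSummary, BarDetail) and field types are illustrative assumptions, not taken from the original code, and each class would live in its own file.

    // One class per scenario, so no field is ever null.
    public final class BarSummary {
        private final String name;
        private final String address;
        private final double distanceKm;
        private final String slogan;

        public BarSummary(String name, String address, double distanceKm, String slogan) {
            this.name = name;
            this.address = address;
            this.distanceKm = distanceKm;
            this.slogan = slogan;
        }

        public String getName() { return name; }
        public String getAddress() { return address; }
        public double getDistanceKm() { return distanceKm; }
        public String getSlogan() { return slogan; }
    }

    public final class BarDetail {
        private final String name;
        private final String address;
        private final java.net.URI websiteUrl;
        private final java.net.URI logoUrl;

        public BarDetail(String name, String address, java.net.URI websiteUrl, java.net.URI logoUrl) {
            this.name = name;
            this.address = address;
            this.websiteUrl = websiteUrl;
            this.logoUrl = logoUrl;
        }

        public String getName() { return name; }
        public String getAddress() { return address; }
        public java.net.URI getWebsiteUrl() { return websiteUrl; }
        public java.net.URI getLogoUrl() { return logoUrl; }
    }

    If the shared fields grow, both classes could hold a small immutable core object (name and address) instead of repeating them; either way, no getter can return an unexpected null.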

    Read the article

  • What is the point of the PImpl pattern when we can use an interface for the same purpose in C++?

    - by ZijingWu
    I see a lot of source code that uses the PIMPL idiom in C++. I assume its purpose is to hide private data/types/implementation, so it can break dependencies and thereby reduce compile time and header-include issues. But an interface class in C++ also has this capability; it can also be used to hide data/types and implementation. And to let the caller see only the interface when creating an object, we can add a factory method declaration to the interface header. The comparison is: Cost: the interface way costs less, because you don't even need to repeat the public wrapper function implementation void Bar::doWork() { return m_impl->doWork(); }, you just need to define the signature in the interface. Familiarity: the interface technique is better understood by every C++ developer. Performance: the interface way performs no worse than the PIMPL idiom; both add an extra memory access, so I assume the performance is the same. Following is pseudocode to illustrate my question: // Forward declaration helps you avoid including the BarImpl header, and everything included in the BarImpl header. class BarImpl; class Bar { public: // public functions void doWork(); private: // You don't need to recompile Bar.cpp after changing the implementation in BarImpl.cpp BarImpl* m_impl; }; The same purpose can be implemented using an interface: // Bar.h class IBar { public: virtual ~IBar(){} // public functions virtual void doWork() = 0; }; // to expose only the interface, instead of the class name, to the caller IBar* createObject(); So what's the point of PIMPL?
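    The interface-plus-factory half of the question is not really C++-specific, so here is that half sketched in Java purely for illustration; the names (IBar, Bars.create, BarImpl) are made up for the sketch, and nothing here settles the C++ cost or performance comparison.

    // Callers depend only on the interface; the concrete class is hidden behind a factory.
    interface IBar {
        void doWork();
    }

    final class Bars {
        private Bars() {}

        // The factory is the only place that knows the concrete type.
        static IBar create() {
            return new BarImpl();
        }

        private static final class BarImpl implements IBar {
            @Override
            public void doWork() {
                System.out.println("doing work");
            }
        }
    }

    public class Demo {
        public static void main(String[] args) {
            IBar bar = Bars.create();
            bar.doWork();
        }
    }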

    Read the article

  • What Java data structure/design pattern best models this object, considering it would perform these methods?

    - by zundarz
    Methods:
    1. getDistance(CityA,CityB) // Returns distance between two cities
    2. getCitiesInRadius(CityA,integer) // Returns cities within a given distance of another city
    3. getCitiesBeyondRadius(CityA,integer) // Returns cities beyond a given distance of another city
    4. getRemoteDestinations(integer) // Returns all city pairs greater than x distance of each other
    5. getLocalDestinations(integer) // Returns all city pairs within x distance of each other
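    A minimal sketch of one way to back these methods, assuming distances are kept in a symmetric lookup table keyed by city name; the Map-based structure and the addDistance helper are assumptions for illustration, not the only reasonable design.

    import java.util.*;

    public class CityDistances {
        private final Map<String, Map<String, Integer>> distances = new HashMap<>();

        // Record a symmetric distance between two cities.
        public void addDistance(String a, String b, int distance) {
            distances.computeIfAbsent(a, k -> new HashMap<>()).put(b, distance);
            distances.computeIfAbsent(b, k -> new HashMap<>()).put(a, distance);
        }

        public int getDistance(String cityA, String cityB) {
            return distances.get(cityA).get(cityB);
        }

        public Set<String> getCitiesInRadius(String cityA, int radius) {
            Set<String> result = new HashSet<>();
            for (Map.Entry<String, Integer> e : distances.getOrDefault(cityA, Map.of()).entrySet()) {
                if (e.getValue() <= radius) result.add(e.getKey());
            }
            return result;
        }

        public Set<String> getCitiesBeyondRadius(String cityA, int radius) {
            Set<String> result = new HashSet<>();
            for (Map.Entry<String, Integer> e : distances.getOrDefault(cityA, Map.of()).entrySet()) {
                if (e.getValue() > radius) result.add(e.getKey());
            }
            return result;
        }

        // All unordered city pairs farther apart than the given distance.
        public List<String[]> getRemoteDestinations(int distance) {
            List<String[]> pairs = new ArrayList<>();
            for (String a : distances.keySet()) {
                for (Map.Entry<String, Integer> e : distances.get(a).entrySet()) {
                    if (a.compareTo(e.getKey()) < 0 && e.getValue() > distance) {
                        pairs.add(new String[] { a, e.getKey() });
                    }
                }
            }
            return pairs;
        }

        // All unordered city pairs within the given distance.
        public List<String[]> getLocalDestinations(int distance) {
            List<String[]> pairs = new ArrayList<>();
            for (String a : distances.keySet()) {
                for (Map.Entry<String, Integer> e : distances.get(a).entrySet()) {
                    if (a.compareTo(e.getKey()) < 0 && e.getValue() <= distance) {
                        pairs.add(new String[] { a, e.getKey() });
                    }
                }
            }
            return pairs;
        }
    }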

    Read the article

  • Which is a better design pattern for a database wrapper: save as you go or save when you're done?

    - by izuriel
    I know this is probably a bad way to ask this question. I was unable to find another question that addressed this. The full question is this: We're producing a wrapper for a database and have two different viewpoints on managing data with the wrapper. The first is that all changes made to a data object in code must be persisted in the database by calling a "save" method to actually save the changes. The other side is that these changes should be saved as they are made, so if I change a property it's saved, and if I change another it's saved as well. What are the pros/cons of either choice, and which is the "proper" way to manage the data?
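    To make the trade-off concrete, here is a toy sketch of the two styles; every name in it (BufferedUser, AutoSaveUser, Database.update) is hypothetical and only meant to show where the database round-trips happen, not how your wrapper actually looks.

    import java.util.HashMap;
    import java.util.Map;

    // Style 1: changes are buffered and written in one explicit save() call.
    class BufferedUser {
        private final Database db;                      // assumed wrapper around the real database
        private final Map<String, Object> pending = new HashMap<>();

        BufferedUser(Database db) { this.db = db; }

        void setEmail(String email) { pending.put("email", email); }
        void setName(String name)   { pending.put("name", name); }

        // One round-trip, applied when the caller says it is done editing.
        void save() { db.update("users", pending); pending.clear(); }
    }

    // Style 2: every setter persists immediately.
    class AutoSaveUser {
        private final Database db;

        AutoSaveUser(Database db) { this.db = db; }

        // Nothing is ever "unsaved", but each change costs a round-trip
        // and other readers can observe half-finished edits.
        void setEmail(String email) { db.update("users", Map.of("email", email)); }
        void setName(String name)   { db.update("users", Map.of("name", name)); }
    }

    interface Database {
        void update(String table, Map<String, Object> changes);
    }

    Buffering gives one atomic write per edit session but allows forgotten unsaved changes; saving on every setter avoids that risk at the cost of chattier database access and visible partial updates.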

    Read the article

  • Why does Google not ban these sites using this SEO pattern? [on hold]

    - by saddam.bg
    I have seen some sites using a different kind of SEO to promote copyrighted materials such as movies. They have also submitted their sites to Google Webmaster Tools but still have not been banned. Their Alexa ranks are 7000 or less. On the other hand, I have run 5 movie affiliate sites and all of them got banned by Google within a short period of time. I copied the URL of the homepage of solarmovie.me and pasted it into Google search, and instead of the homepage URL I saw that their category or tag page shows as the homepage (www.solarmovie.me/watch-category/hollyw... Now, is solarmovie.me publishing its posts as a single page or something else? I tried to find out what kind of SEO or coding that was, but I couldn't, since I have very little knowledge about coding. I have also seen the same thing with ALLUC.TO in Google search (www.alluc.to/popular-links.html). Could anyone please help with this kind of SEO so that I don't keep getting banned by Google or having my site removed from the index? All SEO webmasters, I need your help! Please give me some good tips for this type of SEO. Thank you very much.

    Read the article

  • Diagnosing ADF Mobile iOS deployment problems

    - by Chris Muir
    From time to time I encounter customers who have taken possession of a brand new Apple Mac, have that excited "I've just spent more on a computer then I ever wanted to but it's okay" crazy gleam in their eye, but on pre-loading all the necessary software for Oracle's ADF Mobile to start their mobile campaign, following Oracle's setup instructions and deploying their first app to Apple's XCode iPhone Simulator they hit this error message in the JDeveloper Log-Deployment window: [01:36:46 PM] Deployment cancelled. [01:36:46 PM] ----  Deployment incomplete  ----. [01:36:46 PM] Failed to build the iOS application bundle. [01:36:46 PM] Deployment failed due to one or more errors returned by '/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild'.  The following is a summary of the returned error(s): Command-line execution failed (Return code: 69) "Oh, return code 69, I know that well" I hear you say.  Admittedly the error code is less than useful besides drawing some titters from the peanut gallery. Before explaining what's gone wrong, I think it's useful to teach customers how to diagnose these issues themselves.  When ADF Mobile commences a deployment, be it to Apple's iOS or Google's Android platforms, JDeveloper and ADF Mobile do a good job in the Log window of showing you what the deployment process entails.  In the case of deploying to iOS the log window will literally include the XCode commands executed to complete the deployment cycle. As example here's the log output that was produced before the error message was raised.... take the opportunity to read this line by line and note the command line calls highlighted in blue: (Note some of the following lines have been split over multiple lines to suit reading on this blog, each original line is preceded by a timestamp. Ensure to check the exact commands from JDev) [01:36:33 PM] Target platform is (iOS). [01:36:33 PM] Beginning deployment of ADF Mobile application 'LayoutDemo' to iOS using profile 'IOS_MOBILE_NATIVE_archive1'. [01:36:34 PM] Command-line executed: [/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild, -version] [01:36:34 PM] Command-line execution succeeded. [01:36:34 PM] Running dependency analysis... [01:36:34 PM] Building... [01:36:34 PM] Deploying 3 profiles... [01:36:35 PM] Wrote Archive Module to /Users/chris/fmw/jdeveloper/jdev/extensions/ oracle.adf.mobile/Samples/PublicSamples/LayoutDemo/ApplicationController/ deploy/ApplicationController.jar [01:36:35 PM] WARNING: No Resource Catalog enabled ADF components found to package [01:36:36 PM] Wrote Archive Module to /Users/chris/fmw/jdeveloper/jdev/extensions/ oracle.adf.mobile/Samples/PublicSamples/LayoutDemo/ViewController/ deploy/ViewController.jar [01:36:36 PM] Verifying existence of the .adf source directory of the ADF Mobile application... [01:36:36 PM] Verifying Application Controller project exists... [01:36:36 PM] Verifying application dependencies... [01:36:36 PM] The application may not function correctly because the following dependent libraries are missing: /Users/chris/jdev/jdeveloper/jdeveloper/jdev/extensions/oracle.adf.mobile/ lib/adfmf.springboard.jar [01:36:36 PM] Verifying project dependencies... [01:36:36 PM] Validating application XML files... [01:36:36 PM] Validating XML files in project ApplicationController... [01:36:36 PM] Validating XML files in project ViewController... [01:36:40 PM] Copying common javascript files... [01:36:41 PM] Copying FARs to the ADF Mobile Framework application... 
[01:36:41 PM] Extracting Feature Archive file, "ApplicationController.jar" to deployment folder, "ApplicationController". [01:36:42 PM] Extracting Feature Archive file, "ViewController.jar" to deployment folder, "ViewController". [01:36:42 PM] Deploying skinning files... [01:36:43 PM] Copying the CVM SDK files built for the x86 processor... [01:36:43 PM] Copying the CVM JDK files built for the x86 processor... [01:36:43 PM] Command-line executed: [cp, -R, -p, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/iOS/jvmti/x86/, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/ Samples/PublicSamples/ LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/temporary_xcode_project/lib] [01:36:43 PM] Command-line execution succeeded. [01:36:43 PM] Command-line executed: [cp, -R, -p, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/iOS/jvmti/jar/, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/ temporary_xcode_project/lib] [01:36:43 PM] Command-line execution succeeded. [01:36:43 PM] Copying security related files to the ADF Mobile Framework application... [01:36:44 PM] Command-line executed from path: /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/temporary_xcode_project/ [01:36:44 PM] Command-line executed: /Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild clean install -configuration Debug -sdk /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/ Developer/SDKs/iPhoneSimulator6.1.sdk DSTROOT=/Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/Destination_Root/ IPHONEOS_DEPLOYMENT_TARGET=5.0 TARGETED_DEVICE_FAMILY=1,2 PRODUCT_NAME=LayoutDemo ADD_SETTINGS_BUNDLE=NO As you can see, once we move past JDeveloper undertaking its work, in the last few lines it hands off to Apple's XCode to assemble and deploy the required .ipa file. From the original error message that followed, complaining about xcodebuild failing with return code 69, we can quickly see the exact command line used to call xcodebuild. As this is the exact command line call with all its options, you're free to open a Terminal window in Mac OSX and execute the same command by simply copying and pasting the command line. And via this you'll then find out what return code 69 actually means. Unfortunately it's not that exciting. For Macs that have just been installed and configured with XCode, XCode (and for that matter iTunes), which is required by ADF Mobile to deploy, must have been run at least once beforehand on your brand new Mac (to be clear, that's once ever, not once every restart). On doing so you will be presented with a license agreement from Apple that you must accept. Only once you've done this will the command line calls work. They're currently failing because you haven't accepted the legal terms and conditions. (Arguably you can also accept the terms and conditions from the command line too, but ADF Mobile cannot do this on your behalf, so it's just easier to open the tools and confirm the legal requirements that way.) Putting aside the error code and its meaning, watching the log window, seeing what commands are executed and learning what they do will assist you in diagnosing issues yourself and solving these sorts of issues relatively quickly.
From my perspective as an Oracle Product Manager, it allows me to say "this is the stuff you don't need to worry about when you use ADF Mobile when it's configured correctly" .... as you can see my salesman qualities shine through. For anyone who is happily using ADF Mobile on a Mac and wondering why you didn't hit these issues, it's quite likely that you already accepted the license conditions before deploying via ADF Mobile.  For instance, though I'm not a fan of iTunes itself, iTunes was one of the first things I loaded on my Mac to access my Justin Bieber albums.

    Read the article

  • Design by Contract with Microsoft .Net Code Contract

    - by Fredrik N
    I have done some talks at different events and summits about Defensive Programming and Design by Contract; the last one was at Cornerstone’s Developer Summit 2010, and the next will be at SweNug (Sweden .Net User Group). I decided to write a blog post about some of the stuff I was talking about. Users are a terrible thing! Protect yourself from them. "Human users have a gift for doing the worst possible thing at the worst possible time." – Michael T. Nygard, Release It! The kind of users Michael T. Nygard is talking about are the users of a system. We also have users that use our code; the users I'm going to focus on are the users of our code: me, you and other developers. "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." – Martin Fowler Good programmers also write code that humans know how to use, and good programmers make sure software behaves in a predictable manner despite inputs or user actions. Design by Contract Design by Contract (DbC) is a way for us to make a contract between us (the code writers) and the users of our code. It's about "If you give me this, I promise to give you this". It's not about business validations; that is something completely different that should be part of the domain model. DbC is there to make sure the users of our code use it in a correct way, and so that we can rely on the contract and write code knowing that the users will follow it. It will make it much easier for us to write code once a contract is specified. Something like the following code is something we may see often: public void DoSomething(Object value) { value.DoIKnowThatICanDoThis(); } Here "value" can be used directly or passed to other methods and used later. What some of us can easily forget is that "value" can be null. We will probably not pass a null value, but someone else that uses our code may. I think most of you (including me) have passed null into a method because you don't know whether the argument needs to be given a valid value, and I bet most of you have also got the "Null reference exception". Sometimes this "Null reference exception" can be hard and time-consuming to fix, because we need to search through our code to see where the null value was passed in. Wouldn't it be much better if we could, as early as possible, specify that the value can't be null, so the users of our code know it when they start to use our code, and before run-time execution? This is where DbC comes into the picture. We can use DbC to specify what we need, and by doing so we can rely on the contract when we write our code. So the code above can actually use the DoIKnowThatICanDoThis() method on the value object without worrying that "value" might be null.
The contract between the users of the code and us writing the code says that "value" can't be null. Pre- and Postconditions When working with DbC we specify pre- and postconditions. A precondition is a condition that should be met before a query or command is executed. An example of a precondition is: "The value argument of the method can't be null", and we make sure "value" isn't null before the method is called. A postcondition is a condition that should be met when a command or query is completed; a postcondition will make sure the result is correct. An example of a postcondition is "The method will return a list with at least 1 item". Commands and Queries When using DbC, we need to know what a Command and a Query are, because some principles that are good to follow are based on commands and queries. A Command is something that will not return anything, like SQL's CREATE, UPDATE and DELETE. There are two kinds of Commands when using DbC: the Creation commands (for example a Constructor), and Others. Others can for example be a Command to add a value to a list, or to remove or update a value, etc. //Creation commands public Stack(int size) //Other commands public void Push(object value); public void Remove(); A Query is something that will return something, for example an Attribute, Property or Function, like SQL's SELECT. There are two kinds of Queries: the Basic Queries (queries that aren't based on other queries), and the Derived Queries, queries that are based on other queries. Here is an example of the queries of a Stack: //Basic Queries public int Count; public object this[int index] { get; } //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } To understand the principles that are good to follow when using DbC, we need to know about the Commands and the different Queries. The 6 Principles When working with DbC, it's advisable to follow some principles to make it easier to define and use contracts. The DbC principles are: Separate commands and queries. Separate basic queries from derived queries. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries.
For each command, write a postcondition that specifies the value of every basic query. For every query and command, decide on a suitable precondition. Write invariants to define unchanging properties of objects. Before I write about each of them, I want you to know that I'm going to use .Net 4.0 Code Contracts. In the rest of the post I will use a simple Stack (yes, I know, .Net already has a Stack class) to give you a basic understanding of using DbC. A Stack is a data structure where the last item in will be the first item out. Here is a basic implementation of a Stack where no contract is specified yet: public class Stack { private object[] _array; //Basic Queries public uint Count; public object this[uint index] { get { return _array[index]; } set { _array[index] = value; } } //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } //Is related to Count and this[] Query public object Top() { return this[Count]; } //Creation commands public Stack(uint size) { Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { this[++Count] = value; } public void Remove() { this[Count] = null; Count--; } } Note: The Stack is implemented in a way that demonstrates the use of Code Contracts simply; the implementation may not look like how you would implement it, so don't think this is the perfect Stack implementation, it is only used for demonstration. Before I go deeper into the principles, I will briefly mention how we can use .Net Code Contracts. As mentioned before, pre- and postconditions are about "Requiring" something and "Ensuring" something. When using Code Contracts, we use a static class called "Contract", which is located in the "System.Diagnostics.Contracts" namespace. The contract must be specified at the top of our member statement block. To specify a precondition with Code Contracts we use the Contract.Requires method, and to specify a postcondition we use the Contract.Ensures method.
Here is an example where both a pre- and a postcondition are used: public object Top() { Contract.Requires(Count > 0, "Stack is empty"); Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } The contract above requires that Count is greater than 0; if not, we can't get the item at the top of the Stack. We also ensure that the result of the Top query (by using the Contract.Result method we can specify a postcondition that checks whether the value returned from a method is correct) is equal to this[Count]. 1. Separate Commands and Queries When working with DbC, it's important to separate Commands and Queries. A method should either be a command that performs an action, or return information to the caller, not both. Asking a question shouldn't change the answer. The following is an example of a Command and a Query of a Stack: public void Push(object value) public object Top() Push is a command and will not return anything, it just adds a value to the Stack; Top is a query to get the item at the top of the stack. 2. Separate basic queries from derived queries There are two different kinds of queries: the basic queries that don't rely on other queries, and derived queries that use a basic query. The "Separate basic queries from derived queries" principle is about the fact that derived queries can be specified in terms of basic queries. So this principle is mostly about recognizing whether a query is a derived query or a basic query. That will then make it much easier to follow the other principles.
The following code shows a basic query and a derived query: //Basic Queries public uint Count; //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } We can see that IsEmpty uses the Count query, and that makes IsEmpty a derived query. 3. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries. When the derived query is recognized, we can follow the 3rd principle. For each derived query, we can create a postcondition that specifies what result our derived query will return in terms of one or more basic queries. Remember that DbC is about contracts between the users of the code and us writing the code. So we can't just demand that the users pass in a valid value; we must also ensure that we give the users what they want when they follow our contract. The IsEmpty query of the Stack uses the Count query, and that makes IsEmpty a derived query, so we should now write a postcondition that specifies what result will be returned in terms of a basic query, in this case the Count query: //Basic Queries public uint Count; //Derived Queries public bool IsEmpty() { Contract.Ensures(Contract.Result<bool>() == (Count == 0)); return Count == 0; } Contract.Ensures is used to create a postcondition. The above code will make sure that the result of IsEmpty (by using Contract.Result to get the result of the IsEmpty method) is correct, that is to say that IsEmpty will be either true or false based on whether Count is equal to 0 or not. The postcondition uses a basic query, so IsEmpty now follows the 3rd principle. We also have another derived query, the Top query; it will also need a postcondition, and it uses all the basic queries. The result of the Top method must be the same value as the this[] query returns. //Basic Queries public uint Count; public object this[uint index] { get { return _array[index]; } set { _array[index] = value; } } //Derived Queries //Is related to Count and this[] Query public object Top() { Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } 4. For each command, write a postcondition that specifies the value of every basic query.
For each command we will create a postcondition that specifies the value of the basic queries. If we look at the Stack implementation we have three Commands: one Creation command, the Constructor, and two other commands, Push and Remove. Those commands need a postcondition, and they should include a basic query to follow the 4th principle. //Creation commands public Stack(uint size) { Contract.Ensures(Count == 0); Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1); Contract.Ensures(this[Count] == value); this[++Count] = value; } public void Remove() { Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } As you can see, the Creation command ensures that Count will be 0 when the Stack is created; when a Stack is created there shouldn't be any items in it. The Push command takes a value and puts it into the Stack; when an item is pushed into the Stack, Count needs to be increased so that we know the number of items added, and we must also make sure the item is really added to the Stack. The postcondition of the Push method makes sure that the old value of Count (by using Contract.OldValue we can get the value a Query had before the method was called) plus 1 is equal to the Count query; this is how we can ensure that Push increases Count by one. We also make sure the this[] query will now contain the item we pushed into the Stack. The Remove method must make sure Count is decreased by one when the top item is removed from the Stack. The Commands now follow the 4th principle, where each command has a postcondition that uses the value of basic queries. Note: the principle says every basic Query, but Remove only used one Query, the Count. That's because this command can't use the this[] query, since an item is removed, so the only way to make sure an item is removed is to use the Count query; Remove still follows the principle. 5. For every query and command, decide on a suitable precondition. We have so far focused only on postconditions; now it's time for some preconditions. The 5th principle is about deciding on a suitable precondition for every query and command. If we start by looking at one of our basic queries (I will not go through all Queries and Commands here, just some of them), the this[] query, we can't pass an index that is lower than 1 (.Net arrays and lists are zero based, but not the stack in this blog post ;)) and the index can't be greater than the number of items in the stack. So here we need a precondition.
public object this[uint index] { get { Contract.Requires(index >= 1); Contract.Requires(index <= Count); return _array[index]; } } Think of the Contract as documentation about how to use the code in a correct way; if the contract could be specified elsewhere (not as part of the method body), we could simply write "return _array[index]" and there would be no need to check whether index is greater or less than Count, because that is specified in a "contract". The implementation of Code Contracts requires that the contract is specified in the code. As a developer I would rather have this contract elsewhere (like Spec#) or implemented the way Eiffel does it, as part of the language. Now that we have looked at one Query, we can also look at one Command, the Remove command (you can see the whole implementation of the Stack at the end of this blog post, where preconditions are added to more queries and commands than what I'm going to show in this section). We can only Remove an item if Count is greater than 0, so we can write a precondition that requires that Count be greater than 0. public void Remove() { Contract.Requires(Count > 0); Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } 6. Write invariants to define unchanging properties of objects. The last principle is about making sure the object is feeling great! This is done by using invariants. When using Code Contracts we can specify invariants by adding a method with the ContractInvariantMethod attribute; the method must be private or public and can only contain calls to Contract.Invariant. To make sure the Stack feels great, the Stack must have 0 or more items; Count can never be a negative value, so that each command and query of the Stack can be used.
Here is our invariant for the Stack object: [ContractInvariantMethod] private void ObjectInvariant() { Contract.Invariant(Count >= 0); } Note: The ObjectInvariant method will be called every time after a Query or Command is called. Here is the full example using Code Contracts: public class Stack { private object[] _array; //Basic Queries public uint Count; public object this[uint index] { get { Contract.Requires(index >= 1); Contract.Requires(index <= Count); return _array[index]; } set { Contract.Requires(index >= 1); Contract.Requires(index <= Count); _array[index] = value; } } //Derived Queries //Is related to Count Query public bool IsEmpty() { Contract.Ensures(Contract.Result<bool>() == (Count == 0)); return Count == 0; } //Is related to Count and this[] Query public object Top() { Contract.Requires(Count > 0, "Stack is empty"); Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } //Creation commands public Stack(uint size) { Contract.Requires(size > 0); Contract.Ensures(Count == 0); Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { Contract.Requires(value != null); Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1); Contract.Ensures(this[Count] == value); this[++Count] = value; } public void Remove() { Contract.Requires(Count > 0); Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } [ContractInvariantMethod] private void ObjectInvariant() { Contract.Invariant(Count >= 0); } } Summary By using Design by Contract we can make sure the users use our code in a correct way, and we must also make sure the users get the expected results when they use our code. This can be done by specifying contracts. To make Design by Contract easy to use, some principles are good to follow, like the separation of commands and queries. With .Net 4.0 we can use the Code Contracts feature to specify contracts.

    Read the article

  • How come the ls command prints in multiple columns on a tty but only one column everywhere else?

    - by David Lou
    Even after using Unix-like OSes for a couple years, this behaviour still baffles me. When I use the ls command in a directory that has lots of files, the output is usually nicely formatted into multiple columns. Here's an example: $ ls a.txt C.txt f.txt H.txt k.txt M.txt p.txt R.txt u.txt W.txt z.txt A.txt d.txt F.txt i.txt K.txt n.txt P.txt s.txt U.txt x.txt Z.txt b.txt D.txt g.txt I.txt l.txt N.txt q.txt S.txt v.txt X.txt B.txt e.txt G.txt j.txt L.txt o.txt Q.txt t.txt V.txt y.txt c.txt E.txt h.txt J.txt m.txt O.txt r.txt T.txt w.txt Y.txt However, if I try to redirect the output to a file, or pipe it to another command, only a single column appears in the output. Using the same example directory as above, here's what I get when I pipe ls to wc: $ ls | wc 52 52 312 In other words, wc thinks there are 52 lines, even though the output to the terminal has only 5. I haven't observed this behaviour in any other command. Would you like to explain this to me?

    Read the article

  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    summary: Why does this fail $ ssh foo 'R --version | head -n 1' bash: R: command not found but this succeeds $ ssh foo 'grep -nHe 'bashrc' ~/.bash_profile' /home/me/.bash_profile:3:# source the users .bashrc if it exists /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then /home/me/.bash_profile:5: source "${HOME}/.bashrc" $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc' /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R' $ ssh foo '/share/linux86_64/bin/R --version | head -n 1' R version 2.14.1 (2011-12-22) ? details: I am a (rootless) user on 2 clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has servers managed individually, so I get in the habit of doing, e.g., EXEC_NAME='whatever' for S in 'foo' 'bar' 'baz' ; do ssh ${SERVER} "${EXEC_NAME} --version" done This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not: e.g. (compare alias below to alias above), $ ssh bar 'R --version | head -n 1' bash: R: command not found $ ssh bar 'grep -nHe 'bashrc' ~/.bash_profile' /home/me/.bash_profile:3:# source the users .bashrc if it exists /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then /home/me/.bash_profile:5: source "${HOME}/.bashrc" $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc' /home/me/.bashrc:118:alias R='/share/linux/bin/R' $ ssh bar '/share/linux86_64/bin/R --version | head -n 1' R version 2.14.1 (2011-12-22) Using aliases copes well with these install differences when I interactively shell into the server, but fails when I try to script ssh commands (as above); i.e., # interactively $ ssh foo ... foo> R --version calls my alias for R on remote host=foo, but # scripting $ ssh foo 'R --version' doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?

    Read the article

  • How can I totally flatten a PDF in Mac OS on the command line?

    - by Matthew Leingang
    I use Mac OS X Snow Leopard. I have a PDF with form fields, annotations, and stamps on it. I would like to freeze (or "flatten") that PDF so that the form fields can't be changed and the annotations/stamps are no longer editable. Since I actually have many of these PDFs, I want to do this automatically on the command line. Some things I've tried/considered, with their degree of success: Open in Preview and Print to File. This creates a totally flat PDF without changing the file size. The only way to automate seems to be to write a kludgy UI-based AppleScript, though, which I've been trying to avoid. Open in Acrobat Pro and use a JavaScript function to flatten. Again, not sure how to automate this on the command line. Use pdftk with the flatten option. But this only flattens form fields, not stamps and other annotations. Use cupsfilter which can create PDF from many file formats. Like pdftk this flattened only the form fields. Use cups-pdf to hook into the Mac's printserver and save a PDF file instead of print. I used the macports version. The resulting file is flat but huge. I tried this on an 8MB file; the flattened PDF was 358MB! Perhaps this can be combined with a ghostscript call as in Ubuntu Tip:Howto reduce PDF file size from command line. Any other suggestions would be appreciated.

    Read the article

  • VNC Server that can be used from command line?

    - by jesusiniesta
    I'm looking for a replacement for a custom VNC server that we have been using in my company for a long time. I need a simple executable that can be run from the command line by an IT support application without the user noticing it (our application will warn the user; we don't want him to see we are using that VNC server). I need it to support Windows and preferably also OSX. The only option I've found is UltraVNC, but I can't configure it from the command line to accept loopback connections without authentication. We already have a whole VNC Viewer + VNC Repeater + Bouncers architecture, and the only missing piece is the VNC Server. Is there any solution you could suggest? I'm afraid I'll end up developing a new VNC server myself, maybe based on an open-source one. EDIT: When I said I don't want the user to notice this VNC server, I should have added that I don't want him even noticing the installation. So it would be better if it can be installed silently or executed as a portable executable (for instance, UltraVNC can be installed and run as a service from the command line, or simply executed quietly, with only a notification icon; its problem is that I can't run it without authentication).

    Read the article

  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string # you need to find the number of times the pattern matches in the string # any one liner or simple pythonic solution? import random def matchIt(yourString, yourPattern): """find the number of times yourPattern occurs in yourString""" count = 0 matchTimes = 0 # How can you simplify the for-if structures? for coin in yourString: #return to base if count == len(pattern): matchTimes = matchTimes + 1 count = 0 #special case to return to 2, there could be more this type of conditions #so this type of if-conditionals are screaming for a havoc if count == 2 and pattern[count] == 1: count = count - 1 #the work horse #it could be simpler by breaking the intial string of lenght 'l' #to blocks of pattern-length, the number of them is 'l - len(pattern)-1' if coin == pattern[count]: count=count+1 average = len(yourString)/matchTimes return [average, matchTimes] # Generates the list myString =[] for x in range(10000): myString= myString + [int(random.random()*2)] pattern = [1,0,0] result = matchIt(myString, pattern) print("The sample had "+str(result[1])+" matches and its size was "+str(len(myString))+".\n" + "So it took "+str(result[0])+" steps in average.\n" + "RESULT: "+str([a for a in "FAILURE" if result[0] != 8])) # Sample Output # # The sample had 1656 matches and its size was 10000. # So it took 6 steps in average. # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
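    As an aside, the task described in the opening comment (count how many times a short pattern occurs in a sequence of coin flips) can be expressed as a plain sliding-window comparison with no special-case resets. A minimal sketch of that structure follows, written in Java rather than Python purely for illustration; it is not a drop-in replacement for the code above.

    import java.util.Random;

    public class PatternCount {
        // Count occurrences of pattern by sliding it over every starting position.
        static int countMatches(int[] sequence, int[] pattern) {
            int matches = 0;
            for (int start = 0; start + pattern.length <= sequence.length; start++) {
                boolean matched = true;
                for (int k = 0; k < pattern.length; k++) {
                    if (sequence[start + k] != pattern[k]) { matched = false; break; }
                }
                if (matched) matches++;
            }
            return matches;
        }

        public static void main(String[] args) {
            Random rng = new Random();
            int[] coins = new int[10000];
            for (int i = 0; i < coins.length; i++) coins[i] = rng.nextInt(2);   // fair coin flips

            int[] pattern = { 1, 0, 0 };
            int matches = countMatches(coins, pattern);
            System.out.println("The sample had " + matches + " matches in " + coins.length + " flips,");
            System.out.println("roughly one every " + (coins.length / Math.max(matches, 1)) + " positions.");
        }
    }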

    Read the article

  • Encapsulation of code in MATLAB

    - by user531225
    My code is pathname=uigetdir; filename=uigetfile('*.txt','choose a file name.'); data=importdata(filename); element= (data.data(:,10)); in_array=element; pattern= [1 3]; locations = cell(1, numel(pattern)); for p = 1:(numel(pattern)) locations{p} = find(in_array == pattern(p)); end idx2 = []; for p = 1:numel(locations{1}) start_value = locations{1}(p); for q = 2:numel(locations) found = true; if (~any((start_value + q - 1) == locations{q})) found = false; break; end end if (found) idx2(end + 1) = locations{1}(p); end end [m2,n2]=size(idx2) res_name= {'one' 'two'}; res=[n n2]; In this code I am finding a pattern in one of the columns of my data file and counting how many times it's repeated. I have about 200 files that I want to do the same with, but unfortunately I'm stuck. This is what I have added so far: pathname=uigetdir; files=dir('*.txt'); for k=1:length(files) filename=files(k).name; data(k)=importdata(files(k).name); element{k}=data(1,k).data(:,20); in_array=element;pattern= [1 3]; locations = cell(1, numel(pattern)); for p = 1:(numel(pattern)) locations{p} = find(in_array{k}== pattern(p)); end idx2{k} = []; How can I continue this code?

    Read the article

  • Comparison of strings

    - by EmiLazE
    I am writing a program that simulates the game Mastermind, but I am struggling with how to compare the guessed pattern to the key pattern. The game conditions are changed a little: patterns consist of letters. If an element of the guessed pattern is equal to an element of the key pattern, and the index is also equal, print 'b'. If an element of the guessed pattern is equal to an element of the key pattern, but the index is not, print 'w'. If an element of the guessed pattern is not equal to an element of the key pattern, print a dot. In the feedback about the guessed pattern, 'b's come first, 'w's second, '.'s last. My problem is that I cannot think of a way that totally satisfies these rules. for (i=0; i<patternlength; i++) { for (x=0; x<patternlength; x++) { if (guess[i]==key[x] && i==x) printf("b"); if (guess[i]==key[x] && i!=x) printf("w"); if (guess[i]!=key[x]) printf("."); } }
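    The usual way to satisfy the "all b's first, then w's, then dots" ordering is to score in two passes instead of printing inside the comparison loop: first count exact matches, then count right-letter-wrong-place matches among what is left, and only then build the output string. A hedged sketch of that idea follows (in Java rather than C, assuming lowercase letters; the method name and leftover-tally approach are illustrative choices).

    public class Mastermind {
        static String feedback(char[] guess, char[] key) {
            int black = 0;
            int[] keyLeft = new int[26];    // key letters not matched exactly
            int[] guessLeft = new int[26];  // guess letters not matched exactly

            // Pass 1: exact matches (right letter, right position).
            for (int i = 0; i < guess.length; i++) {
                if (guess[i] == key[i]) {
                    black++;
                } else {
                    keyLeft[key[i] - 'a']++;
                    guessLeft[guess[i] - 'a']++;
                }
            }

            // Pass 2: right letter, wrong position, among the leftovers.
            int white = 0;
            for (int c = 0; c < 26; c++) {
                white += Math.min(keyLeft[c], guessLeft[c]);
            }

            // b's first, then w's, then a dot for every position that earned neither.
            return "b".repeat(black)
                 + "w".repeat(white)
                 + ".".repeat(guess.length - black - white);
        }

        public static void main(String[] args) {
            System.out.println(feedback("abcd".toCharArray(), "abdc".toCharArray())); // prints bbww
        }
    }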

    Read the article

  • How can this C and PHP programmer learn Ruby and Rails?

    - by Winston
    I came from a C, php and bash background, it was easy to learn because they all have the same C structure, which I can associate with what I already know. Then 2 years ago I learned Python and I learned it quite well, Python is easier for me to learn than Ruby. Then since last year, I was trying to learn Ruby, then Rails, and I admit, until now I still couldn't get it, the irony is that those are branded as easy to learn, but for a seasoned programmer like me, I just couldn't associate it with what I learned before, I have 2 books on both Ruby and Rails, and when I'm reading it nothing is absorbed into my mind, and I'm close to giving up... In ruby, I'm having a hard time grasping the concepts of blocks, and why there's @variables that can be accessed by other functions, and what does $variable and :variable do? And in Rails, why there's function like this_is_another_function_that_do_this, so thus ruby, is it just a naming convention or it's auto-generated with thisvariable _can_do_this_function. I'm still puzzled that where all those magic concepts and things came from? And now, 1 year of trying and absorbing, but still no progress... Edit: To summarize: How can I learn about blocks, and how can it be related to concepts from PHP/C? Variables, what does does it mean when a variable is prefixed with: @ $ : "Magic concepts", suchs as rails declarations of Records, what happens behind the scenes when I write has_one X OK so, bear with me with my confusion, at least I'm honest with myself, and it's over a year now since I first trying to learn ruby, and I'm not getting younger.. so I learned this in Bash/C/PHP solve_problem($problem) { if [ -e $problem == "trivial" ]; then write_solution(); else breakdown_problem_into_N_subproblems(\; define_relationship_between_subproblems; for i in $( command $each_subproblem ); do solve_problem $i done fi } write_solution(problem) { some_solution=$(command <parameters> "input" | command); command | command $some_solution > output_solved_problem_to_file } breakdown_problem_into_N_subproblems($problems) { for i in $problems; do command $i | command > i_can_output_a_file_right_away done } define_relationship_between_subproblems($problems) { if [ -e $problem == "relationship" ]; then relationship=$(command; command | command; command;) elsif [ -e $problem == "another_relationship" ]; relationship=$(command; command | command; command;) fi } In C/PHP is something like this solve_problem(problem) { if (problem == trivial) write_solution; else { breakdown_problem_into_N_subproblems; define_relationship_between_subproblems; for (each_subproblem) solve_problems(subproblem); } } And now, I just couldn't connect the dots with Ruby, |b|{ blocks }, using @variables, :variables, and variables_with_this_things..

    Read the article

  • Is it worthwhile to implement the observer pattern in PHP?

    - by Extrakun
    I have been meaning to make use of design patterns in PHP, such as the observer pattern, but the fact that I have to recreate the observer relationships each time the page is loaded pains me. As references are saved as new concrete objects in the session, there is no way to preserve the relationships between subjects and their observers unless you use a GUID or some other property to form a lookup, and store that property instead. With the cost of recreating the relationships each time a page is loaded, is it worthwhile to use design patterns such as observer in PHP, compared to having a cleaner design? Any real-world experience to share?
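    For readers who want to see exactly which part has to be rebuilt on every request: the subject-observer wiring is just object references held in memory, which PHP's request-per-page model throws away. A minimal sketch of that wiring follows (in Java for brevity; the names are illustrative); in PHP, the attach() calls are what would have to be repeated, or reconstructed from identifiers stored in the session, on each page load.

    import java.util.ArrayList;
    import java.util.List;

    interface Observer {
        void update(String event);
    }

    class Subject {
        private final List<Observer> observers = new ArrayList<>();

        // This wiring step is what a per-request runtime has to redo each time.
        void attach(Observer o) { observers.add(o); }

        void publish(String event) {
            for (Observer o : observers) {
                o.update(event);
            }
        }
    }

    public class ObserverDemo {
        public static void main(String[] args) {
            Subject subject = new Subject();
            subject.attach(event -> System.out.println("logger saw: " + event));
            subject.attach(event -> System.out.println("mailer saw: " + event));
            subject.publish("order-created");
        }
    }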

    Read the article

  • What is a good pattern for binding a collection of objects coming from WCF, in Silverlight?

    - by Krishna
    Hi there, I've got a question about a Silverlight WCF databinding pattern: There are many examples about how to bind data using {Binding} expressions in XAML, how to make async calls to a WCF service, set the DataContext property of an element in the UI, how to use ObservableCollections and INotifyPropertyChanged, INotifyCollectionChanged and so on. Background: I'm using the MVVM pattern and have a Silverlight ItemsControl whose ItemsSource is set to an ObservableCollection property on my ViewModel object. My view is of course the XAML which has the {Binding}. Say the model object is called 'Metric'. My ViewModel periodically makes calls to a WCF service that returns ObservableCollection<MetricInfo>. MetricInfo is the data transfer object (DTO). My question is two-fold: Is there any way to avoid copying each property of MetricInfo to the model class, Metric? When the WCF call completes, is there any way to make sure I sync the items which are in both my local ObservableCollection and the result of the WCF call, without having to first clear out all the items in the local collection and then add all the ones from the WCF call result? thanks, Krishna
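    On the second question, the usual alternative to clear-and-re-add is a keyed merge: walk the incoming list, update local items that already exist, add the new ones, and remove locals that no longer appear. A rough sketch of that reconciliation follows (written in Java rather than C#, with an assumed getId()/copyFrom() on the model); in Silverlight the same loop would run against the ObservableCollection so that only genuinely changed items raise collection notifications.

    import java.util.*;

    // Metric, getId() and copyFrom() are assumed shapes, not an actual API.
    class Metric {
        private final int id;
        private String value;

        Metric(int id, String value) { this.id = id; this.value = value; }

        int getId() { return id; }
        void copyFrom(Metric other) { this.value = other.value; }

        @Override
        public String toString() { return id + "=" + value; }
    }

    public class MergeDemo {
        // Keyed merge of a fresh service result into the existing local collection.
        static void merge(List<Metric> local, List<Metric> incoming) {
            Map<Integer, Metric> incomingById = new HashMap<>();
            for (Metric m : incoming) incomingById.put(m.getId(), m);

            // Update existing items and drop the ones no longer present.
            Iterator<Metric> it = local.iterator();
            while (it.hasNext()) {
                Metric existing = it.next();
                Metric fresh = incomingById.remove(existing.getId());
                if (fresh == null) {
                    it.remove();
                } else {
                    existing.copyFrom(fresh);
                }
            }
            // Whatever is left in the map is new.
            local.addAll(incomingById.values());
        }

        public static void main(String[] args) {
            List<Metric> local = new ArrayList<>(List.of(new Metric(1, "a"), new Metric(2, "b")));
            List<Metric> fromService = List.of(new Metric(2, "b2"), new Metric(3, "c"));
            merge(local, fromService);
            System.out.println(local); // [2=b2, 3=c]
        }
    }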

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications, however the use cases above are the reverse i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls from the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in, can be centralized and batched e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/Fedex store to pick up your parcel. Every time. Pros: On-premise assets not exposed to the Internet, firewall issues avoided by only initiating outbound connections Cons: Polling mechanisms may affect performance, may not satisfy near real-time requirements Pattern: Open Firewall Ports The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports, routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A door mail slot cut-out allows the postman to drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs, simpler administration once the service is provisioned Cons: Needs firewall ports to be opened up for new services, may not suffice for batch integration requiring direct database access Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages, and then drops it inside your home.
Pros: Individual firewall ports don't need to be opened, more suited for high scalability needs, can support large volume data integration, easier management of one connection vs a multitude of open ports Cons: VPN setup, specific hardware support, requires cloud provider to support virtual private computing Pattern: Reverse Proxy / API Gateway The on-premise system uses a reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom built implementations are also possible if specific functionality (such as message store-n-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox in your home premises outside your door. The post office delivers the parcels in your mailbox, from where you can securely retrieve it. Pros: Very secure, very flexible Cons: Introduces a new software component, needs DMZ deployment and management Pattern: On-Premise Agent (Tunneling) A light weight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection either with pull or push based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH Tunneling etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, however you still take precautions with chain locks and package inspections. Pros: Light weight software, IT doesn't need to setup anything Cons: May bypass critical firewall checks e.g. virus scans, separate software download, proliferation of non-IT managed software Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases, the basic "Pull from Cloud" may be acceptable, whereas in others, an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
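    As a rough illustration of the "Pull from Cloud" pattern described above, here is a minimal polling sketch in C# (the language used for the other code on this page). The endpoint URL, query parameter, and PersistLocally helper are invented for the example and are not part of any Oracle or SaaS API; a real implementation would use the vendor's query/notification interface and handle errors and cancellation properly.

    // Minimal "Pull from Cloud" sketch: an outbound-only, scheduled poll.
    // The URL, query parameter, and PersistLocally are hypothetical placeholders.
    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    public class CloudPoller
    {
        private static readonly HttpClient Http = new HttpClient();

        public static async Task RunAsync(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                // Outbound call only: no inbound firewall port needs to be opened.
                var response = await Http.GetAsync("https://saas.example.com/api/events?since=lastRun", token);
                if (response.IsSuccessStatusCode)
                {
                    var payload = await response.Content.ReadAsStringAsync();
                    PersistLocally(payload); // hand off to the on-premise system
                }

                // Batch-friendly: pick up whatever accumulated, then wait an hour.
                await Task.Delay(TimeSpan.FromHours(1), token);
            }
        }

        private static void PersistLocally(string payload)
        {
            // Placeholder: write to a staging table, queue, or file for downstream processing.
            Console.WriteLine($"Received {payload.Length} bytes from the cloud.");
        }
    }

    The trade-off the sketch makes visible is the one listed under Cons: data arrives only as often as the poll runs, so near real-time requirements push you toward one of the other patterns.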

    Read the article

  • SQL ADO.NET shortcut extensions (old school!)

    - by Jeff
    As much as I love me some ORM's (I've used LINQ to SQL quite a bit, and for the MSDN/TechNet Profile and Forums we're using NHibernate more and more), there are times when it's appropriate, and in some ways simpler, to just throw up some old-school ADO.NET connections, commands, readers and such. It still feels like a pain though to new up all the stuff, make sure it's closed, blah blah blah. It's pretty much my least favorite part of writing data access code. To minimize the pain, I have a set of extension methods that I like to use that drastically reduce the code you have to write. Here they are...

    public static void Using(this SqlConnection connection, Action<SqlConnection> action)
    {
        connection.Open();
        action(connection);
        connection.Close();
    }

    public static SqlCommand Command(this SqlConnection connection, string sql)
    {
        var command = new SqlCommand(sql, connection);
        return command;
    }

    public static SqlCommand AddParameter(this SqlCommand command, string parameterName, object value)
    {
        command.Parameters.AddWithValue(parameterName, value);
        return command;
    }

    public static object ExecuteAndReturnIdentity(this SqlCommand command)
    {
        if (command.Connection == null)
            throw new Exception("SqlCommand has no connection.");
        command.ExecuteNonQuery();
        command.Parameters.Clear();
        command.CommandText = "SELECT @@IDENTITY";
        var result = command.ExecuteScalar();
        return result;
    }

    public static SqlDataReader ReadOne(this SqlDataReader reader, Action<SqlDataReader> action)
    {
        if (reader.Read())
            action(reader);
        reader.Close();
        return reader;
    }

    public static SqlDataReader ReadAll(this SqlDataReader reader, Action<SqlDataReader> action)
    {
        while (reader.Read())
            action(reader);
        reader.Close();
        return reader;
    }

    It has been a while since I've really revisited these, so you will likely find opportunity for further optimization. The bottom line here is that you can chain together a bunch of these methods to make a much more concise database call, in terms of the code on your screen, anyway. Here are some examples:

    public Dictionary<string, string> Get()
    {
        var dictionary = new Dictionary<string, string>();
        _sqlHelper.GetConnection().Using(connection =>
            connection.Command("SELECT Setting, [Value] FROM Settings")
                .ExecuteReader()
                .ReadAll(r => dictionary.Add(r.GetString(0), r.GetString(1))));
        return dictionary;
    }

    or...

    public void ChangeName(User user, string newName)
    {
        _sqlHelper.GetConnection().Using(connection =>
            connection.Command("UPDATE Users SET Name = @Name WHERE UserID = @UserID")
                .AddParameter("@Name", newName)
                .AddParameter("@UserID", user.UserID)
                .ExecuteNonQuery());
    }

    The _sqlHelper.GetConnection() is just some other code that gets a connection object for you. You might have an even cleaner way to take that step out entirely. This looks more fluent, and the real magic sauce for me is the reader bits, where you can put any kind of arbitrary method in there to iterate over the results.
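    The post shows ReadAll chains but not ReadOne; the following is a small hypothetical usage sketch, not from the original post, that reuses the extensions and the _sqlHelper object defined above and assumes an invented Users table:

    // Hypothetical example: fetch a single row with the ReadOne extension,
    // chaining the same Using/Command/AddParameter helpers shown above.
    public string GetUserName(int userId)
    {
        string name = null;
        _sqlHelper.GetConnection().Using(connection =>
            connection.Command("SELECT Name FROM Users WHERE UserID = @UserID")
                .AddParameter("@UserID", userId)
                .ExecuteReader()
                .ReadOne(r => name = r.GetString(0)));
        return name;
    }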

    Read the article

  • Is there a recommended way to use the Observer pattern in MVP using GWT?

    - by Tomislav Nakic-Alfirevic
    I am thinking about implementing a user interface according to the MVP pattern using GWT, but have doubts about how to proceed. These are (some of) my goals:
    - the presenter knows nothing about the UI technology (i.e. uses nothing from com.google.*)
    - the view knows nothing about the model or the presenter
    - the model knows nothing of the view or the presenter (...obviously)
    I would place an interface between the view and the presenter and use the Observer pattern to decouple the two: the view generates events and the presenter gets notified. What confuses me is that java.util.Observer and java.util.Observable are not supported in GWT. This suggests that what I'm doing is not the recommended way to do it, as far as GWT is concerned, which leads me to my questions: what is the recommended way to implement MVP using GWT, specifically with the above goals in mind? How would you do it?
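    For context, the decoupling the question describes does not depend on java.util.Observer; a hand-rolled listener or event on a view interface achieves the same thing. The sketch below is a language-agnostic illustration written in C# to match the other code on this page (GWT itself would use Java and its own handler types), and every name in it is invented for the example:

    // Illustrative MVP wiring with a hand-rolled observer; all names are invented.
    using System;
    using System.Collections.Generic;

    // The view contract: no UI-toolkit types leak through it.
    public interface ILoginView
    {
        event Action SubmitClicked;          // view -> presenter notification
        string UserName { get; }
        void ShowError(string message);      // presenter -> view command
    }

    // The presenter depends only on the interface, never on widgets.
    public class LoginPresenter
    {
        private readonly ILoginView _view;
        private readonly HashSet<string> _knownUsers; // stand-in for the model

        public LoginPresenter(ILoginView view, HashSet<string> knownUsers)
        {
            _view = view;
            _knownUsers = knownUsers;
            _view.SubmitClicked += OnSubmit; // subscribe: the observer part
        }

        private void OnSubmit()
        {
            if (!_knownUsers.Contains(_view.UserName))
                _view.ShowError("Unknown user");
        }
    }

    The point of the sketch is only that the presenter sees an interface and a notification mechanism it owns, so the absence of a particular Observer class in the UI framework does not block the pattern.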

    Read the article

  • Is this a well-known design pattern? What is its name?

    - by GenEric35
    Hi, I have seen this often in code, but when I speak of it I don't know the name of such a 'pattern'. I have a method with 2 arguments that calls an overloaded method with 3 arguments and intentionally sets the 3rd one to an empty string:

    public void DoWork(string name, string phoneNumber)
    {
        DoWork(name, phoneNumber, string.Empty);
    }

    public void DoWork(string name, string phoneNumber, string emailAddress)
    {
        // do the work
    }

    The reason I'm doing this is to avoid duplicating code and to allow existing callers to still call the method that has only 2 parameters. I have associated a few tags with this question, but it probably fits in more categories of questions. Is this a pattern, and does it have a name?
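    To make the "existing callers keep working" point concrete, here is a small hypothetical caller-side sketch; the class name and argument values are invented, and the two overloads from the question are repeated only so the example stands on its own:

    // Hypothetical callers: old 2-argument call sites keep compiling,
    // newer call sites can pass the extra email argument.
    public class ContactImporter
    {
        public void Import()
        {
            DoWork("Ada Lovelace", "555-0100");                       // legacy call site
            DoWork("Grace Hopper", "555-0101", "grace@example.com");  // newer call site
        }

        public void DoWork(string name, string phoneNumber)
        {
            DoWork(name, phoneNumber, string.Empty);
        }

        public void DoWork(string name, string phoneNumber, string emailAddress)
        {
            // do the work
        }
    }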

    Read the article

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes: I need a pattern to create independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (the read or write position) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether the segments overlap or not doesn't matter), I cannot easily create views over the stream, where each view would store a start and end position, because reading from a view - which would translate to reading from the underlying stream adjusted by the start/end positions - would change the state of the underlying stream instance. What I could do is, on each read from a view instance, adjust the Position of the stream and read the chunks I need, but I cannot do that concurrently. Why is it designed this way, and what kind of pattern could I implement to create independent views over a single stream instance that allow reading/writing independently (and concurrently)?
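    One possible shape for such a view, not from the question but a sketch of the idea under the assumption that the base stream is seekable, is to keep the position in the view object and serialize each seek-and-read on the shared stream behind a lock; all names below are invented:

    // Sketch of a read-only "view" over a shared seekable stream. Each view keeps
    // its own offset; a shared lock serializes Seek+Read on the base stream.
    using System;
    using System.IO;

    public class StreamView
    {
        private readonly Stream _base;      // must be seekable
        private readonly object _gate;      // shared by all views over the same stream
        private readonly long _start;
        private readonly long _end;
        private long _position;             // view-local, relative to _start

        public StreamView(Stream baseStream, object gate, long start, long length)
        {
            _base = baseStream;
            _gate = gate;
            _start = start;
            _end = start + length;
            _position = 0;
        }

        public int Read(byte[] buffer, int offset, int count)
        {
            lock (_gate)
            {
                long absolute = _start + _position;
                int toRead = (int)Math.Min(count, _end - absolute);
                if (toRead <= 0) return 0;

                _base.Seek(absolute, SeekOrigin.Begin);  // reposition the shared stream
                int read = _base.Read(buffer, offset, toRead);
                _position += read;                       // only the view's own position advances
                return read;
            }
        }
    }

    Two views sharing the same base stream and gate can then be read from different threads without clobbering each other's position, although they contend on the lock; opening several independent streams over the same underlying file is the usual alternative when true concurrency matters.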

    Read the article
