Search Results

Search found 25779 results on 1032 pages for 'custom model binder'.


  • Custom InputIterator for Boost graph (BGL)

    - by Shadow
    Hi, I have a graph with custom properties on the vertices and edges. I now want to create a copy of this graph, but I don't want the vertices to be as complex as in the original. By this I mean it would suffice for the vertices to have the same indices (vertex_index_t) as they do in the original graph. Instead of doing the copying by hand I wanted to use the copy functionality of boost::adjacency_list (see http://www.boost.org/doc/libs/1_37_0/libs/graph/doc/adjacency_list.html):

        template <class EdgeIterator>
        adjacency_list(EdgeIterator first, EdgeIterator last,
                       vertices_size_type n,
                       edges_size_type m = 0,
                       const GraphProperty& p = GraphProperty())

    The description there says: "The EdgeIterator must be a model of InputIterator. The value type of the EdgeIterator must be a std::pair, where the type in the pair is an integer type. The integers will correspond to vertices, and they must all fall in the range of [0, n)." Unfortunately I have to admit that I don't quite get how to define an EdgeIterator that is a model of InputIterator. Here's what I've succeeded with so far:

        template< class EdgeIterator, class Edge >
        class MyEdgeIterator // : public input_iterator< std::pair<int, int> >
        {
        public:
            MyEdgeIterator() {}
            MyEdgeIterator(EdgeIterator& rhs) : actual_edge_it_(rhs) {}
            MyEdgeIterator(const MyEdgeIterator& to_copy) {}

            bool operator==(const MyEdgeIterator& to_compare)
            {
                return actual_edge_it_ == to_compare.actual_edge_it_;
            }

            bool operator!=(const MyEdgeIterator& to_compare)
            {
                return !(*this == to_compare);
            }

            Edge operator*() const { return *actual_edge_it_; }
            const MyEdgeIterator* operator->() const;

            MyEdgeIterator& operator++() { ++actual_edge_it_; return *this; }

            MyEdgeIterator operator++(int)
            {
                MyEdgeIterator<EdgeIterator, Edge> tmp = *this;
                ++*this;
                return tmp;
            }

        private:
            EdgeIterator& actual_edge_it_;
        };

    However, this doesn't work as it is supposed to and I ran out of clues. So, how do I define the appropriate InputIterator?
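    For illustration, here is a minimal sketch (not from the question) of one common way to satisfy that constructor: collect the original edges into a container of integer pairs, whose iterators already model InputIterator, instead of hand-writing an iterator class. The names SimpleGraph and copy_structure are hypothetical.

        #include <boost/graph/adjacency_list.hpp>
        #include <boost/tuple/tuple.hpp>
        #include <utility>
        #include <vector>

        typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                      boost::directedS> SimpleGraph;

        template <class Graph>
        SimpleGraph copy_structure(const Graph& g)
        {
            typename boost::property_map<Graph, boost::vertex_index_t>::const_type
                index = boost::get(boost::vertex_index, g);

            // The constructor requires a range whose value type is a pair of
            // integers in [0, n); a vector of pairs models InputIterator.
            std::vector<std::pair<int, int> > edge_pairs;
            typename boost::graph_traits<Graph>::edge_iterator ei, ei_end;
            for (boost::tie(ei, ei_end) = boost::edges(g); ei != ei_end; ++ei)
                edge_pairs.push_back(std::make_pair(
                    boost::get(index, boost::source(*ei, g)),
                    boost::get(index, boost::target(*ei, g))));

            return SimpleGraph(edge_pairs.begin(), edge_pairs.end(),
                               boost::num_vertices(g));
        }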

    Read the article

  • Rails 2.3 using another model's named_scope or alternative

    - by mustafi
    Hi. Let's say I have two models like so:

        class Comment < ActiveRecord::Base
          belongs_to :user
          named_scope :about_x, :conditions => "comments.text like '%x%'"
        end

        class User < ActiveRecord::Base
          has_many :comments
        end

    I would like to use the models so that I can return all the users and all comments with text like '%x%':

        all_user_comments_about_x = User.comments.about_x

    How to proceed? Thank you
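    For illustration, a minimal sketch (not from the question, some_id is a placeholder) of two approaches that work in Rails 2.3: named scopes compose with associations on an instance, so the scope can be reused directly; alternatively the matching comments can be fetched with their users preloaded and grouped:

        # reuse Comment's named_scope through the association on one user
        user = User.find(some_id)
        comments = user.comments.about_x

        # or fetch all matching comments with users eager-loaded, grouped by user
        comments_by_user = Comment.about_x.all(:include => :user).group_by(&:user)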

    Read the article

  • Acr.ExtDirect – Part 1 – Method Resolvers

    - by Allan Ritchie
    One of the most important things for any open source library, in my opinion, is to be as open as possible while avoiding becoming invasive to your code/business model design. I personally could never stand marking my business and/or data access code with attributes everywhere. XML also isn't really a favourite with many people these days since it comes with a startup performance hit and requires runtime compiling. I find that there are a whole ton of communication libraries out there currently requiring this (i.e. WCF, RIA, etc.). Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose. It goes beyond that though; there are two other "out-of-the-box" implementations:

    • Convention – I don't actually recommend using this one since it opens up all of your public instance methods to remote execution calls.
    • XML Configuration – This isn't so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration &, as I said earlier, XML isn't the fav these days.

    So what are your options if you don't like attributes, convention, or XML configuration? Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution:

        public interface IDirectMethodResolver {

            bool IsServiceType(ComponentModel model, Type type);
            string GetNamespace(ComponentModel model);
            string[] GetDirectMethodNames(ComponentModel model);
            DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
        }

    Now to implement our own method resolver:

        public class TestResolver : IDirectMethodResolver {

            #region IDirectMethodResolver Members

            /// <summary>
            /// Determine if you are calling a service
            /// </summary>
            public bool IsServiceType(ComponentModel model, Type type) {
                return (type.Namespace == "MyBLL.Data");
            }

            /// <summary>
            /// Return the calling name for the client side
            /// </summary>
            public string GetNamespace(ComponentModel model) {
                return model.Name;
            }

            public string[] GetDirectMethodNames(ComponentModel model) {
                switch (model.Name) {
                    case "Products":
                        return new[] {
                            "GetProducts",
                            "LoadProduct",
                            "Save",
                            "Update"
                        };

                    case "Categories":
                        return new[] {
                            "GetProducts"
                        };

                    default:
                        throw new ArgumentException("Invalid type");
                }
            }

            public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
                if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                    return DirectMethodType.FormSubmit;

                else if (method.Name.StartsWith("Load"))
                    return DirectMethodType.FormLoad;

                else
                    return DirectMethodType.Direct;
            }

            #endregion
        }

    And there you have it, your own custom method resolver. Pretty easy and pretty open ended!
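    As a footnote, here is a minimal sketch (hypothetical class, not from the article) of the attribute-based alternative mentioned above, piggy-backing WCF's contract attributes so no resolver is needed:

        using System.ServiceModel;

        [ServiceContract]
        public class ProductService {

            [OperationContract]   // marked, so exposed for remote execution
            public string[] GetProducts() {
                return new[] { "Widget", "Gadget" };
            }

            // no attribute - stays invisible to remote callers
            public void InternalHousekeeping() { }
        }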

    Read the article

  • WPF Style Override breaks Validation Error event propagation

    - by Ben McMillan
    I have a custom control that overrides Window:

        public class Window : System.Windows.Window {
            static Window() {
                DefaultStyleKeyProperty.OverrideMetadata(typeof(Window),
                    new System.Windows.FrameworkPropertyMetadata(typeof(Window)));
            }
            ...
        }

    It also has a style:

        <Style TargetType="{x:Type Controls:Window}" BasedOn="{StaticResource {x:Type Window}}">
            <Setter Property="WindowStyle" Value="None" />
            <Setter Property="Padding" Value="5" />
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="{x:Type Controls:Window}">
                    ...

    Unfortunately, this breaks the propagation of the Validation.ErrorEvent for my window's contents. That is, my window can receive the event just fine, but I don't know what to do with it to mimic how a standard Window (or whoever) deals with it. If the validating controls are placed in a standard window, they work. They also work if I just take out the OverrideMetadata call (leaving them inside my custom window). Why is this happening, and how can I get the stock functionality for handling these validation error events working again? Thanks!
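    For illustration, a minimal sketch (hypothetical handler, not from the question) of one way to observe the bubbling validation errors regardless of the template swap, using a class handler registered in the static constructor:

        static Window() {
            DefaultStyleKeyProperty.OverrideMetadata(typeof(Window),
                new System.Windows.FrameworkPropertyMetadata(typeof(Window)));

            // Listen for Validation.ErrorEvent at the class level so the
            // custom window still reacts to bubbling validation errors.
            System.Windows.EventManager.RegisterClassHandler(typeof(Window),
                System.Windows.Controls.Validation.ErrorEvent,
                new System.EventHandler<System.Windows.Controls.ValidationErrorEventArgs>(OnValidationError));
        }

        private static void OnValidationError(object sender,
                System.Windows.Controls.ValidationErrorEventArgs e) {
            // e.Action says whether the error was added or removed;
            // update window-level state here as needed.
        }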

    Read the article

  • ASP.NET MVC routing issue with Google Chrome client

    - by synergetic
    My Silverlight 4 app is hosted in an ASP.NET MVC 2 web application. It works fine when I browse with Internet Explorer 8; however, Google Chrome (version 5) cannot find the ASP.NET controllers. Specifically, the following controller action works with both Chrome and IE:

        //[OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")]
        public ContentResult TestMe() {
            ContentResult result = new ContentResult();
            XElement response = new XElement("SvrResponse",
                new XElement("Data", "my data"));
            result.Content = response.ToString();
            return result;
        }

    If I uncomment the [OutputCache] attribute then it works with IE but not with Chrome. Also, I use custom model binding with controllers, so if I write the following:

        public ContentResult TestMe(UserContext userContext) { ... }

    it also works with IE, but again not with Chrome, which gives me an error message saying that the resource was not found. Of course, I have configured IIS 6 to handle all requests via aspnet_isapi.dll and I have registered the custom model binder in my web app's Global.asax inside the Application_Start() method. Can someone explain what might be the cause? Thank you.
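    For illustration, a minimal sketch (not from the question) of a common way to get the same no-cache behaviour without the [OutputCache] attribute, by setting the response cache policy inside the action:

        public ContentResult TestMe() {
            // same intent as NoStore = true, Duration = 0
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetNoStore();

            XElement response = new XElement("SvrResponse",
                new XElement("Data", "my data"));
            return Content(response.ToString(), "text/xml");
        }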

    Read the article

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild with TeamCity, so the whole build process is more or less custom. Here lies the start of my problems: as the solution I am deploying only contains Report Server projects (rds), no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to override the default TFS build sequence and inject my own. The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message. Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (e.g. AfterCompile)? The core steps that I'd like to achieve are:

    • Bundle the RDL and datasource files
    • Connect to the host server to register/deploy the reports
    • Re-apply any subscriptions that previously existed
    • Run tests to verify the deployment succeeded and is returning results as expected

    I have found another article on Reporting Services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.
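    For illustration, a minimal sketch (not from the question) of the hook-task approach in TFSBuild.proj: TFS 2008 defines an empty, overridable AfterCompile target, which keeps the standard sequence and its build-status handling intact. The server URL, paths and rs.exe script name are hypothetical:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="AfterCompile">
            <!-- bundle the report definition and data source files -->
            <ItemGroup>
              <ReportFiles Include="$(SolutionRoot)\Reports\**\*.rdl" />
              <ReportFiles Include="$(SolutionRoot)\Reports\**\*.rds" />
            </ItemGroup>
            <Copy SourceFiles="@(ReportFiles)" DestinationFolder="$(BinariesRoot)\Reports" />

            <!-- deploy to the report server with a scripted rs.exe call -->
            <Exec Command="rs.exe -i DeployReports.rss -s http://myserver/reportserver" />
          </Target>
        </Project>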

    Read the article

  • Rails validation count limit on has_many :through

    - by Jeremy
    I've got the following models: Team, Member, Assignment, Role. The Team model has_many Members. Each Member has_many roles through assignments. Role assignments are Captain and Runner. I have also installed devise and CanCan using the Member model. What I need to do is limit each Team to have a max of 1 captain and 5 runners. I found this example, and it seemed to work after some customization, but only on update ('teams/1/members/4/edit'). It doesn't work on create ('teams/1/members/new'). But my other validation (validates :role_ids, :presence => true) does work on both update and create. Any help would be appreciated.

    Update: I've found this example that would seem to be similar to my problem but I can't seem to make it work for my app. It seems that the root of the problem lies with how the count (or size) is performed before and during validation. For example, when updating a record...

    It checks to see how many runners there are on a team and returns a count (i.e. 5). Then when I select a role(s) to add to the member, it takes the known count from the database (i.e. 5), adds the proposed changes (i.e. 1), and then runs the validation check (Team.find(self.team_id).members.runner.count > 5). This works fine because it returns a value of 6, and 6 > 5, so the proposed update fails without saving and an error is given.

    But when I try to create a new member on the team...

    It checks to see how many runners there are on a team and returns a count (i.e. 5). Then when I select a role(s) to add to the member, it takes the known count from the database (i.e. 5) and then runs the validation check WITHOUT factoring in the proposed changes. This doesn't work because it returns a value of 5 known runners, and 5 is not greater than 5, so the proposed update passes and the new member and role are saved to the database with no error.

    Member model:

        class Member < ActiveRecord::Base
          devise :database_authenticatable, :registerable,
                 :recoverable, :rememberable, :trackable, :validatable

          attr_accessible :password, :password_confirmation, :remember_me
          attr_accessible :age, :email, :first_name, :last_name, :sex, :shirt_size,
                          :team_id, :assignments_attributes, :role_ids

          belongs_to :team
          has_many :assignments, :dependent => :destroy
          has_many :roles, through: :assignments

          accepts_nested_attributes_for :assignments

          scope :runner, joins(:roles).where('roles.title = ?', "Runner")
          scope :captain, joins(:roles).where('roles.title = ?', "Captain")

          validate :validate_runner_count
          validate :validate_captain_count
          validates :role_ids, :presence => true

          def validate_runner_count
            if Team.find(self.team_id).members.runner.count > 5
              errors.add(:role_id, 'Error - Max runner limit reached')
            end
          end

          def validate_captain_count
            if Team.find(self.team_id).members.captain.count > 1
              errors.add(:role_id, 'Error - Max captain limit reached')
            end
          end

          def has_role?(role_sym)
            roles.any? { |r| r.title.underscore.to_sym == role_sym }
          end
        end

    Member controller:

        class MembersController < ApplicationController
          load_and_authorize_resource :team
          load_and_authorize_resource :member, :through => :team
          before_filter :get_team
          before_filter :initialize_check_boxes, :only => [:create, :update]

          def get_team
            @team = Team.find(params[:team_id])
          end

          def index
            respond_to do |format|
              format.html # index.html.erb
              format.json { render json: @members }
            end
          end

          def show
            respond_to do |format|
              format.html # show.html.erb
              format.json { render json: @member }
            end
          end

          def new
            respond_to do |format|
              format.html # new.html.erb
              format.json { render json: @member }
            end
          end

          def edit
          end

          def create
            respond_to do |format|
              if @member.save
                format.html { redirect_to [@team, @member], notice: 'Member was successfully created.' }
                format.json { render json: [@team, @member], status: :created, location: [@team, @member] }
              else
                format.html { render action: "new" }
                format.json { render json: @member.errors, status: :unprocessable_entity }
              end
            end
          end

          def update
            respond_to do |format|
              if @member.update_attributes(params[:member])
                format.html { redirect_to [@team, @member], notice: 'Member was successfully updated.' }
                format.json { head :no_content }
              else
                format.html { render action: "edit" }
                format.json { render json: @member.errors, status: :unprocessable_entity }
              end
            end
          end

          def destroy
            @member.destroy
            respond_to do |format|
              format.html { redirect_to team_members_url }
              format.json { head :no_content }
            end
          end

          # Allow empty checkboxes
          # http://railscasts.com/episodes/17-habtm-checkboxes
          def initialize_check_boxes
            params[:member][:role_ids] ||= []
          end
        end

    _form partial:

        <%= form_for [@team, @member], :html => { :class => 'form-horizontal' } do |f| %>
          #...
          # testing the count...
          <ul>
            <li>Captain - <%= Team.find(@member.team_id).members.captain.size %></li>
            <li>Runner - <%= Team.find(@member.team_id).members.runner.size %></li>
            <li>Driver - <%= Team.find(@member.team_id).members.driver.size %></li>
          </ul>
          <div class="control-group">
            <div class="controls">
              <%= f.fields_for :roles do %>
                <%= hidden_field_tag "member[role_ids][]", nil %>
                <% Role.all.each do |role| %>
                  <%= check_box_tag "member[role_ids][]", role.id, @member.role_ids.include?(role.id), id: dom_id(role) %>
                  <%= label_tag dom_id(role), role.title %>
                <% end %>
              <% end %>
            </div>
          </div>
          #...
        <% end %>
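    For illustration, a minimal sketch (not from the question) of one way to make the check see the proposed roles on create as well as update: count the runner roles of the other members in the database, then add the roles staged on this record, assuming the role assignments are populated on the association by role_ids= before validation runs:

        def validate_runner_count
          saved_runners = Team.find(team_id).members.runner
                              .where('members.id != ?', id || 0).count
          proposed = roles.select { |r| r.title == 'Runner' }.size
          if saved_runners + proposed > 5
            errors.add(:role_id, 'Error - Max runner limit reached')
          end
        end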

    Read the article

  • "database already closed" is shown using a custom cursor adapter

    - by kiduxa
    I'm using a cursor with a custom adapter that extends SimpleCursorAdapter:

        public class ListWordAdapter extends SimpleCursorAdapter {

            private LayoutInflater inflater;
            private Cursor mCursor;
            private int mLayout;
            private String[] from;
            private int[] to;

            public ListWordAdapter(Context context, int layout, Cursor c,
                    String[] from, int[] to, int flags) {
                super(context, layout, c, from, to, flags);
                this.mCursor = c;
                this.inflater = LayoutInflater.from(context);
                this.mLayout = layout;
                this.from = from;
                this.to = to;
            }

            private static class ViewHolder {
                //public ImageView img;
                public TextView name;
                public TextView type;
                public TextView translate;
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                if (mCursor.moveToPosition(position)) {
                    ViewHolder holder;
                    if (convertView == null) {
                        convertView = inflater.inflate(mLayout, null);
                        holder = new ViewHolder();
                        // holder.img = (ImageView) convertView.findViewById(R.id.img_row);
                        holder.name = (TextView) convertView.findViewById(to[0]);
                        holder.type = (TextView) convertView.findViewById(to[1]);
                        holder.translate = (TextView) convertView.findViewById(to[2]);
                        convertView.setTag(holder);
                    } else {
                        holder = (ViewHolder) convertView.getTag();
                    }
                    holder.name.setText(mCursor.getString(mCursor.getColumnIndex(from[0])));
                    holder.type.setText(mCursor.getString(mCursor.getColumnIndex(from[1])));
                    holder.translate.setText(mCursor.getString(mCursor.getColumnIndex(from[2])));
                    // holder.img.setImageResource(img_resource);
                }
                return convertView;
            }
        }

    And in the main activity I create it as:

        adapter = new ListWordAdapter(getSherlockActivity(), R.layout.row_list_words,
                mCursorWords, from, to, 0);

    When a modification in the list is made, I call this method:

        public void onWordSaved() {
            WordDAO wordsDao = new WordSqliteDAO();
            Cursor mCursorWords = wordsDao.list(getSherlockActivity());
            adapter.changeCursor(mCursorWords);
        }

    The thing here is that this produces the following exception:

        10-29 11:14:33.810: E/AndroidRuntime(18659): FATAL EXCEPTION: main
        java.lang.IllegalStateException: database /data/data/com.example.palabrasdeldia/databases/palabrasDelDia (conn# 0) already closed
            at android.database.sqlite.SQLiteDatabase.verifyDbIsOpen(SQLiteDatabase.java:2123)
            at android.database.sqlite.SQLiteDatabase.lock(SQLiteDatabase.java:398)
            at android.database.sqlite.SQLiteDatabase.lock(SQLiteDatabase.java:390)
            at android.database.sqlite.SQLiteQuery.fillWindow(SQLiteQuery.java:74)
            at android.database.sqlite.SQLiteCursor.fillWindow(SQLiteCursor.java:311)
            at android.database.sqlite.SQLiteCursor.onMove(SQLiteCursor.java:283)
            at android.database.AbstractCursor.moveToPosition(AbstractCursor.java:173)
            at com.example.palabrasdeldia.adapters.ListWordAdapter.getView(ListWordAdapter.java:42)
            at android.widget.AbsListView.obtainView(AbsListView.java:2128)
            at android.widget.ListView.makeAndAddView(ListView.java:1817)
            at android.widget.ListView.fillSpecific(ListView.java:1361)
            at android.widget.ListView.layoutChildren(ListView.java:1646)
            at android.widget.AbsListView.onLayout(AbsListView.java:1979)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1542)
            at android.widget.LinearLayout.layoutHorizontal(LinearLayout.java:1527)
            at android.widget.LinearLayout.onLayout(LinearLayout.java:1316)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:400)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.support.v4.view.ViewPager.onLayout(ViewPager.java:1589)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:400)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1542)
            at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1403)
            at android.widget.LinearLayout.onLayout(LinearLayout.java:1314)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:400)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1542)
            at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1403)
            at android.widget.LinearLayout.onLayout(LinearLayout.java:1314)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:400)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:400)
            at android.view.View.layout(View.java:9593)
            at android.view.ViewGroup.layout(ViewGroup.java:3877)
            at android.view.ViewRoot.performTraversals(ViewRoot.java:1253)
            at android.view.ViewRoot.handleMessage(ViewRoot.java:2017)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:132)
            at android.app.ActivityThread.main(ActivityThread.java:4028)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:491)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:844)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:602)
            at dalvik.system.NativeStart.main(Native Method)

    If I use SimpleCursorAdapter directly instead of ListWordAdapter, it works fine. What's wrong with my custom adapter implementation? The line in bold in the stack trace (the ListWordAdapter.getView frame) corresponds to "if (mCursor.moveToPosition(position))" inside the getView method.

    EDIT: I have created a custom class to manage DB operations such as open and close:

        public class ConexionBD {

            private Context context;
            private SQLiteDatabase database;
            private DataBaseHelper dbHelper;

            public ConexionBD(Context context) {
                this.context = context;
            }

            public ConexionBD open() throws SQLException {
                this.dbHelper = DataBaseHelper.getInstance(context);
                this.database = dbHelper.getWritableDatabase();
                database.execSQL("PRAGMA foreign_keys=ON");
                return this;
            }

            public void close() {
                if (database.isOpen() && database != null) {
                    dbHelper.close();
                }
            }

            /* Getters and setters */
            public SQLiteDatabase getDatabase() {
                return database;
            }

            public void setDatabase(SQLiteDatabase database) {
                this.database = database;
            }
        }

    And this is my DataBaseHelper:

        public class DataBaseHelper extends SQLiteOpenHelper {

            private static final String DATABASE_NAME = "myDb";
            private static final int DATABASE_VERSION = 1;
            private static DataBaseHelper sInstance = null;

            public static DataBaseHelper getInstance(Context context) {
                // Use the application context, which will ensure that you
                // don't accidentally leak an Activity's context.
                // See this article for more information: http://bit.ly/6LRzfx
                if (sInstance == null) {
                    sInstance = new DataBaseHelper(context.getApplicationContext());
                }
                return sInstance;
            }

            @Override
            public void onCreate(SQLiteDatabase database) {
                ...
            }
            ....

    And this is an example of how I manage a query:

        public Cursor list(Context context) {
            ConexionBD conexion = new ConexionBD(context);
            Cursor mCursor = null;
            try {
                conexion.open();
                mCursor = conexion.getDatabase().query(DataBaseHelper.TABLE_WORD,
                        null, null, null, null, null, Word.NAME);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
            } finally {
                conexion.close();
            }
            return mCursor;
        }

    For every connection to the DB I open it and close it.
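    For illustration, a minimal sketch (not from the question) of the likely fix: keep the database open for the life of the cursor (the singleton helper can stay open across queries), and read rows through the adapter's managed cursor rather than a cached field, so changeCursor() swaps remain safe:

        public Cursor list(Context context) {
            // Do NOT close the connection here; the adapter still needs it to
            // fill the cursor window later, during list layout.
            ConexionBD conexion = new ConexionBD(context);
            conexion.open();
            return conexion.getDatabase().query(DataBaseHelper.TABLE_WORD,
                    null, null, null, null, null, Word.NAME);
        }

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            Cursor cursor = getCursor();            // managed by SimpleCursorAdapter;
            if (cursor.moveToPosition(position)) {  // changeCursor() swaps it safely
                // ... bind views from 'cursor' exactly as before ...
            }
            return convertView;
        }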

    Read the article

  • Creating a dynamic, extensible C# Expando Object

    - by Rick Strahl
    I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language. One place where I've found dynamic to be incredibly useful is in building extensible types or types that expose traditionally non-object data (like dictionaries) in an easier to use and more readable syntax. I wrote about a couple of these for accessing old school ADO.NET DataRows and DataReaders more easily, for example. These classes are dynamic wrappers that provide easier syntax and auto-type conversions, which greatly simplifies code clutter and increases clarity in existing code.

    ExpandoObject in .NET 4.0

    Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again. For example, with ExpandoObject you can do stuff like this:

        dynamic expand = new ExpandoObject();
        expand.Name = "Rick";
        expand.HelloWorld = (Func<string, string>) ((string name) =>
        {
            return "Hello " + name;
        });

        Console.WriteLine(expand.Name);
        Console.WriteLine(expand.HelloWorld("Dufus"));

    Internally, ExpandoObject uses a Dictionary-like structure and interface to store properties and methods and then allows you to add and access properties and methods easily. As cool as ExpandoObject is, it has a few shortcomings too:

    • It's a sealed type, so you can't use it as a base class
    • It only works off 'properties' in the internal Dictionary - you can't expose existing type data
    • It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer

    Expando - A truly extensible Object

    ExpandoObject is nice if you just need a dynamic container for a dictionary-like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work because it's a sealed class that can't be inherited. I started thinking about this very scenario for one of my applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name etc. But then each store also has a number of extended properties that are available to each application. In the real world scenario the data is loaded from the database in a data reader and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically. In the past I've done this very thing with a separate property - Properties - just like I do for this class. But the property and dictionary syntax is not ideal and tedious to work with. I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary, or an ExpandoObject, to hold all those extra properties.

    But wouldn't it be nice if the application could actually extend an existing object that looks something like this, as you can with the Expando object:

        public class User : Westwind.Utilities.Dynamic.Expando
        {
            public string Email { get; set; }
            public string Password { get; set; }
            public string Name { get; set; }
            public bool Active { get; set; }
            public DateTime? ExpiresOn { get; set; }
        }

    and then simply start extending the properties of this object dynamically? Using the Expando object I describe later, you can now do the following:

        [TestMethod]
        public void UserExampleTest()
        {
            var user = new User();

            // Set strongly typed properties
            user.Email = "[email protected]";
            user.Password = "nonya123";
            user.Name = "Rickochet";
            user.Active = true;

            // Now add dynamic properties
            dynamic duser = user;
            duser.Entered = DateTime.Now;
            duser.Accesses = 1;

            // you can also add dynamic props via indexer
            user["NickName"] = "AntiSocialX";
            duser["WebSite"] = "http://www.west-wind.com/weblog";

            // Access strong type through dynamic ref
            Assert.AreEqual(user.Name, duser.Name);

            // Access strong type through indexer
            Assert.AreEqual(user.Password, user["Password"]);

            // access dynamically added value through indexer
            Assert.AreEqual(duser.Entered, user["Entered"]);

            // access index added value through dynamic
            Assert.AreEqual(user["NickName"], duser.NickName);

            // loop through all properties dynamic AND strong type properties (true)
            foreach (var prop in user.GetProperties(true))
            {
                object val = prop.Value;
                if (val == null)
                    val = "null";
                Console.WriteLine(prop.Key + ": " + val.ToString());
            }
        }

    As you can see, this code somewhat blurs the line between a static and dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object. To access the strongly typed properties you can use either the strongly typed instance, the indexer or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string, which can make some data scenarios much easier. To access the 'dynamically added' properties you can use either the indexer on the strongly typed object, or property syntax on the dynamic cast. Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties. Finally, you can iterate over all properties, both dynamic and strongly typed, if you choose. Lots of flexibility.

    Note also that by default the Expando object works against the (this) instance, meaning it extends the current object. You can also pass in a separate instance to the constructor, in which case that object will be used to iterate over to find properties rather than this. Using this approach provides some really interesting functionality when using the dynamic type. To use this we have to add an explicit constructor to the Expando subclass:

        public class User : Westwind.Utilities.Dynamic.Expando
        {
            public string Email { get; set; }
            public string Password { get; set; }
            public string Name { get; set; }
            public bool Active { get; set; }
            public DateTime? ExpiresOn { get; set; }

            public User() : base()
            { }

            // only required if you want to mix in a separate instance
            public User(object instance) : base(instance)
            { }
        }

    to allow the instance to be passed. When you do, you can now do:

        [TestMethod]
        public void ExpandoMixinTest()
        {
            // have Expando work on Addresses
            var user = new User( new Address() );

            // cast to dynamic
            dynamic duser = user;

            // Set strongly typed properties
            duser.Email = "[email protected]";
            user.Password = "nonya123";

            // Set properties on address object
            duser.Address = "32 Kaiea";
            //duser.Phone = "808-123-2131";

            // set dynamic properties
            duser.NonExistantProperty = "This works too";

            // shows default Address.Phone value
            Console.WriteLine(duser.Phone);
        }

    Using the dynamic cast in this case allows you to access *three* different 'objects': the strong type properties, the dynamically added properties in the dictionary and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it).

    How Expando works

    Behind the scenes, Expando is a DynamicObject subclass as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existent property, a known method is fired that our code can intercept and provide a value for. Internally, Expando uses a custom dictionary implementation to hold the dynamic properties you might add to your expandable object.

    Let's look at code first. The code for the Expando type is straightforward and, given what it provides, relatively short. Here it is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Dynamic;
        using System.Reflection;

        namespace Westwind.Utilities.Dynamic
        {
            /// <summary>
            /// Class that provides extensible properties and methods. This
            /// dynamic object stores 'extra' properties in a dictionary or
            /// checks the actual properties of the instance.
            ///
            /// This means you can subclass this expando and retrieve either
            /// native properties or properties from values in the dictionary.
            ///
            /// This type allows you three ways to access its properties:
            ///
            /// Directly: any explicitly declared properties are accessible
            /// Dynamic: dynamic cast allows access to dictionary and native properties/methods
            /// Dictionary: Any of the extended properties are accessible via IDictionary interface
            /// </summary>
            [Serializable]
            public class Expando : DynamicObject, IDynamicMetaObjectProvider
            {
                /// <summary>
                /// Instance of object passed in
                /// </summary>
                object Instance;

                /// <summary>
                /// Cached type of the instance
                /// </summary>
                Type InstanceType;

                PropertyInfo[] InstancePropertyInfo
                {
                    get
                    {
                        if (_InstancePropertyInfo == null && Instance != null)
                            _InstancePropertyInfo = Instance.GetType().GetProperties(
                                BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);
                        return _InstancePropertyInfo;
                    }
                }
                PropertyInfo[] _InstancePropertyInfo;

                /// <summary>
                /// String Dictionary that contains the extra dynamic values
                /// stored on this object/instance
                /// </summary>
                /// <remarks>Using PropertyBag to support XML Serialization of the dictionary</remarks>
                public PropertyBag Properties = new PropertyBag();

                //public Dictionary<string,object> Properties = new Dictionary<string, object>();

                /// <summary>
                /// This constructor just works off the internal dictionary and any
                /// public properties of this object.
                ///
                /// Note you can subclass Expando.
                /// </summary>
                public Expando()
                {
                    Initialize(this);
                }

                /// <summary>
                /// Allows passing in an existing instance variable to 'extend'.
                /// </summary>
                /// <remarks>
                /// You can pass in null here if you don't want to
                /// check native properties and only check the Dictionary!
                /// </remarks>
                public Expando(object instance)
                {
                    Initialize(instance);
                }

                protected virtual void Initialize(object instance)
                {
                    Instance = instance;
                    if (instance != null)
                        InstanceType = instance.GetType();
                }

                /// <summary>
                /// Try to retrieve a member by name first from instance properties
                /// followed by the collection entries.
                /// </summary>
                public override bool TryGetMember(GetMemberBinder binder, out object result)
                {
                    result = null;

                    // first check the Properties collection for member
                    if (Properties.Keys.Contains(binder.Name))
                    {
                        result = Properties[binder.Name];
                        return true;
                    }

                    // Next check for Public properties via Reflection
                    if (Instance != null)
                    {
                        try
                        {
                            return GetProperty(Instance, binder.Name, out result);
                        }
                        catch { }
                    }

                    // failed to retrieve a property
                    result = null;
                    return false;
                }

                /// <summary>
                /// Property setter implementation tries to retrieve value from instance
                /// first then into this object
                /// </summary>
                public override bool TrySetMember(SetMemberBinder binder, object value)
                {
                    // first check to see if there's a native property to set
                    if (Instance != null)
                    {
                        try
                        {
                            bool result = SetProperty(Instance, binder.Name, value);
                            if (result)
                                return true;
                        }
                        catch { }
                    }

                    // no match - set or add to dictionary
                    Properties[binder.Name] = value;
                    return true;
                }

                /// <summary>
                /// Dynamic invocation method. Currently allows only for Reflection based
                /// operation (no ability to add methods dynamically).
                /// </summary>
                public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
                {
                    if (Instance != null)
                    {
                        try
                        {
                            // check instance passed in for methods to invoke
                            if (InvokeMethod(Instance, binder.Name, args, out result))
                                return true;
                        }
                        catch { }
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Reflection Helper method to retrieve a property
                /// </summary>
                protected bool GetProperty(object instance, string name, out object result)
                {
                    if (instance == null)
                        instance = this;

                    var miArray = InstanceType.GetMember(name,
                        BindingFlags.Public | BindingFlags.GetProperty | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0];
                        if (mi.MemberType == MemberTypes.Property)
                        {
                            result = ((PropertyInfo)mi).GetValue(instance, null);
                            return true;
                        }
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Reflection helper method to set a property value
                /// </summary>
                protected bool SetProperty(object instance, string name, object value)
                {
                    if (instance == null)
                        instance = this;

                    var miArray = InstanceType.GetMember(name,
                        BindingFlags.Public | BindingFlags.SetProperty | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0];
                        if (mi.MemberType == MemberTypes.Property)
                        {
                            ((PropertyInfo)mi).SetValue(Instance, value, null);
                            return true;
                        }
                    }
                    return false;
                }

                /// <summary>
                /// Reflection helper method to invoke a method
                /// </summary>
                protected bool InvokeMethod(object instance, string name, object[] args, out object result)
                {
                    if (instance == null)
                        instance = this;

                    // Look at the instanceType
                    var miArray = InstanceType.GetMember(name,
                        BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0] as MethodInfo;
                        result = mi.Invoke(Instance, args);
                        return true;
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Convenience method that provides a string Indexer
                /// to the Properties collection AND the strongly typed
                /// properties of the object by name.
                ///
                /// // dynamic
                /// exp["Address"] = "112 nowhere lane";
                /// // strong
                /// var name = exp["StronglyTypedProperty"] as string;
                /// </summary>
                /// <remarks>
                /// The getter checks the Properties dictionary first,
                /// then looks in PropertyInfo for properties.
                /// The setter checks the instance properties before
                /// checking the Properties dictionary.
                /// </remarks>
                public object this[string key]
                {
                    get
                    {
                        try
                        {
                            // try to get from properties collection first
                            return Properties[key];
                        }
                        catch (KeyNotFoundException ex)
                        {
                            // try reflection on instanceType
                            object result = null;
                            if (GetProperty(Instance, key, out result))
                                return result;

                            // nope doesn't exist
                            throw;
                        }
                    }
                    set
                    {
                        if (Properties.ContainsKey(key))
                        {
                            Properties[key] = value;
                            return;
                        }

                        // check instance for existance of type first
                        var miArray = InstanceType.GetMember(key,
                            BindingFlags.Public | BindingFlags.GetProperty);
                        if (miArray != null && miArray.Length > 0)
                            SetProperty(Instance, key, value);
                        else
                            Properties[key] = value;
                    }
                }

                /// <summary>
                /// Returns the properties of the Expando and, optionally,
                /// the properties of the wrapped instance.
                /// </summary>
                public IEnumerable<KeyValuePair<string, object>> GetProperties(bool includeInstanceProperties = false)
                {
                    if (includeInstanceProperties && Instance != null)
                    {
                        foreach (var prop in this.InstancePropertyInfo)
                            yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null));
                    }

                    foreach (var key in this.Properties.Keys)
                        yield return new KeyValuePair<string, object>(key, this.Properties[key]);
                }

                /// <summary>
                /// Checks whether a property exists in the Property collection
                /// or as a property on the instance
                /// </summary>
                public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false)
                {
                    bool res = Properties.ContainsKey(item.Key);
                    if (res)
                        return true;

                    if (includeInstanceProperties && Instance != null)
                    {
                        foreach (var prop in this.InstancePropertyInfo)
                        {
                            if (prop.Name == item.Key)
                                return true;
                        }
                    }

                    return false;
                }
            }
        }

    Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer and the Contains() and GetProperties() methods, which work against the Properties dictionary AND the internal instance. The reason for not implementing IDictionary is that a) it doesn't add much value since you can access the Properties dictionary directly, and b) I wanted to keep the interface to the class very lean so that it can serve as an entity type if desired. Implementing IDictionary (or even IEnumerable) causes LINQ extension methods to pop up on the type, which obscures the property interface and would only confuse the purpose of the type. IDictionary and IEnumerable are also problematic for XML and JSON serialization - the XML Serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values, which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and leaving iteration functions to the publicly exposed Properties dictionary.

    Note that the Dictionary used here is a custom PropertyBag class I created to allow serialization to work. One important aspect for my apps is that whatever custom properties get added, they have to be accessible to AJAX clients, since the particular app I'm working on is a Single Page Web app where most of the Web access is through JSON AJAX calls. PropertyBag can serialize to XML and one-way serialize to JSON using the JavaScript serializer (not the DCS serializers though).

    The key components that make Expando work in this code are the Properties dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public, so if you choose you can explicitly access the collection to get better performance or to manipulate the members in internal code (like loading up dynamic values from a database). Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. This means that user["Name"] works against native properties of the object, as does user["Name"] = "RogaDugDog".

    What's your Use Case?

    This is still an early prototype, but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and to expose those values in a nice and easy to use manner. I'm also finding that using this type of object for ViewModels works very well to add custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra dictionary approach to extensibility for years - using a dynamic type to make the syntax cleaner is just a bonus here. What can you think of to use this for?

    Resources

    • Source Code and Tests (GitHub)
    • Also integrated in Westwind.Utilities of the West Wind Web Toolkit
    • West Wind Utilities NuGet

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET, Dynamic Types.

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents

    • Why this is the most important part...
    • Understanding the classification and standard rights model
    • Identifying business use cases
    • Creating an effective IRM classification model
      • One single classification across the entire business
      • A context for each and every possible granular use case
    • What makes a good context?
    • Deciding on the use of roles in the context
    • Reviewing the features and security for context roles
    • Summary

    Why this is the most important part...

    Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily work flows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:

    • Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    • K.I.S.S. (Keep It Simple Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    • Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and explain the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:

    • A group of related documents
    • The people that use the documents
    • The roles that these people perform
    • The rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
    A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article). After letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:

    • Document security classification decisions are simple. You only have one context to choose from!
    • User provisioning is simple: just make sure everyone has a role in the only context in the business.
    • Administration is very low: if you assign rights to groups from the business user repository, you probably never have to touch IRM administration again.

    There are however some obvious downsides to this model:

    • All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents - if they can get their hands on a copy, that is.
    • You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    • Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today; imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

    • Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    • Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    • Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    • Q1FY2010 Restricted External - Engineering Group 1 - Research
    • Q1FY2010 Restricted External - Engineering Group 1 - Design
    • Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    • Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    • Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    • Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    • Q1FY2010 Confidential External - Engineering Group 1 - Research
    • Q1FY2010 Confidential External - Engineering Group 1 - Design
    • Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    • Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    • Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    • Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    • Q1FY2010 Top Secret External - Engineering Group 1 - Research
    • Q1FY2010 Top Secret External - Engineering Group 1 - Design
    • Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    Now multiply the above by 6, once for each engineering group: 18 contexts per group. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model?

    • You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
    • Your business may have very complex rights requirements, and mapping these directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant:

    • Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    • From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
they may be able to print content from one but not another, and edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
- Increased synchronization complexity. Imagine a user who, after 3 years in the company, ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
- Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts: too many contexts are unmanageable, and too few do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

- What is the topic?
- Who are the legitimate contributors on this topic?
- Who is the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; it is therefore better to seal to a broad context than not to seal at all.

Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it, then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify:

Administrative roles
- Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
- Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
- Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document related roles
- Contributors: the people who create and edit documents in this context.
- Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
- Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle in gathering the information for our first context is to look at the permissions people will have over sealed documents. First, think about why you are protecting the documents in the first place: to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals.

Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity.

Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for granting people with legitimate needs the appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.

- Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access, and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
- Printed documents are slower to distribute than their digital counterparts, so time-sensitive information in printed format may present a lower risk.
- Print activity is audited, so you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers the important questions you need to answer to get your IRM service deployed and successfully protecting your most sensitive information.
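
To make the context and role structure described above concrete, here is a minimal sketch of the kind of data model this article has been describing. It is purely illustrative and is not Oracle IRM's actual API; every type and member name below is invented for the example:

    using System;
    using System.Collections.Generic;

    // Illustrative toy model of the "context" classification described above:
    // a context groups related documents and grants roles to named users.
    enum ContextRole { Contributor, Reviewer, Reader }

    class Context
    {
        public string Name;               // e.g. "Top Secret - Engineering Research"
        public string BusinessOwner;      // decides who may or may not see content
        public string PointOfContact;     // handles requests for access
        public string Administrator;      // enacts the business owner's decisions

        // Users provisioned into this context, each with a single role.
        public Dictionary<string, ContextRole> Members =
            new Dictionary<string, ContextRole>();

        // The core IRM guarantee: only users with a role in the context
        // can open documents sealed to it.
        public bool CanOpen(string user)
        {
            return Members.ContainsKey(user);
        }

        // Only contributors may create and edit sealed documents.
        public bool CanEdit(string user)
        {
            ContextRole role;
            return Members.TryGetValue(user, out role)
                && role == ContextRole.Contributor;
        }
    }

The point of the sketch is simply that every access decision hangs off a single classification object, which is why a smaller number of broader contexts is so much easier to provision and administer than the 72-context extreme above.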

    Read the article

  • Sharing data between graphics and physics engine in the game?

    - by PolGraphic
    I'm writing a game engine that consists of a few modules. Two of them are the graphics engine and the physics engine. I wonder if it's a good solution to share data between them? The two approaches (sharing or not) look like this:

    Without sharing data:

        GraphicsModel {
            // some data common to graphics and physics, like position
            // some graphics-only data,
            // like textures and the detailed model vertices that physics doesn't need
        };

        PhysicsModel {
            // some data common to graphics and physics, like position
            // some physics-only data
            // (usually my physics data contains A LOT more information than graphics data)
        };

        engine3D->createModel3D(...);
        physicsEngine->createModel3D(...);
        // connect graphics and physics data,
        // e.g. update the graphics model's position when the physics model's position changes

    I see two main problems:

    - A lot of redundant data (like two positions, one each for the physics and graphics data)
    - Problems with updating data (I have to manually update the graphics data when the physics data changes)

    With sharing data:

        Model {
            // data common to graphics and physics, like position
        };

        GraphicsModel : public Model {
            // graphics-only data,
            // like textures and the detailed model vertices that physics doesn't need
        };

        PhysicsModel : public Model {
            // physics-only data
            // (usually my physics data contains A LOT more information than graphics data)
        };

        model = engine3D->createModel3D(...);
        physicsEngine->assignModel3D(&model); // will cast to PhysicsModel for its purposes??
        // when physics changes anything (like position) in the model
        // (which it treats as a PhysicsModel), the position in the graphics data
        // changes as well (because it's the same model)

    Problems here:

    - The physics engine cannot create new objects, it can only "assign" existing ones from engine3D (somehow that feels less independent to me)
    - Casting data in the assignModel3D function
    - The physics and graphics engines must be careful: they cannot delete data when they no longer need it (because the other one may still need it). But that's a rare situation. Moreover, they can just delete the pointer, not the object. Or we can assume that the graphics engine will delete objects and the physics engine just the pointers to them.

    Which way is better? Which will produce more problems in the future? I like the second solution more, but I wonder why most graphics and physics engines prefer the first one (maybe because they normally make only a graphics or only a physics engine, and somebody else connects them in the game?). Do they have any more hidden pros and cons?

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare and SQL Data Compare from RedGate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model.

    To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.

    Separate from the above, we have database hot fixes that need to be applied between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION.

    If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server.

    My only workaround so far is to just have Ops assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than moving faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: [screenshot] But what I got was this: [screenshot] I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.

    Read the article

  • File / Application association using a custom command is gone?

    - by Christian Vielma
    In previous Ubuntu releases, when you wanted to select or change the application used to open a specific file (right-click / Open With Other Application, or Properties), you could type a custom command to open the file. This was very useful, but now in 11.10 I can't find this option; it only shows me a list of applications and a button to look for applications on the Internet. Is there a way to restore the command-line field so I can enter custom commands for opening files?

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we’ve looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused.

    So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code – expression trees. All the callsite binder has to return is an expression (called a ‘restriction’) that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn’t. If the bind result is also represented in an expression tree, these can be combined easily like so:

        if ([restriction is true]) {
            [invoke cached method]
        }

    Take my example from my previous post:

        public class ClassA {
            public static void TestDynamic() {
                CallDynamic(new ClassA(), 10);
                CallDynamic(new ClassA(), "foo");
            }

            public static void CallDynamic(dynamic d, object o) {
                d.Method(o);
            }

            public void Method(int i) {}
            public void Method(string s) {}
        }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

        if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) {
            thisClassA.Method(i);
        }

    Caching callsite results

    So, now, it’s up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one from the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It’ll help if you’ve got this type open in a decompiler to have a look yourself.

    For each callsite, there are 3 layers of caching involved:

    1. The last method invoked on the callsite.
    2. All methods that have ever been invoked on the callsite.
    3. All methods that have ever been invoked on any callsite of the same type.

    We’ll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren’t any entries in the caches to reuse, and invokes the binder to resolve a new method.
    Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I’m assuming there’s no return value):

        if ([restriction is true]) {
            [invoke cached method]
            return;
        }
        if (callSite._match) {
            _match = false;
            return;
        }
        else {
            UpdateAndExecute(callSite, arg0, arg1, ...);
        }

    Woah. What’s going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the ‘primary’ method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast.

    But what if the restrictions don’t match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it’s slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache.

    Why use the same delegate? Stitching together expression trees is an expensive operation. You don’t want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder’s result, compile it, and then use the resulting delegate everywhere in the callsite. But if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation: an ‘invoke’ mode, for when the delegate is set as the value of the Target field, and a ‘match’ mode, used when UpdateAndExecute is searching for a method in the callsite’s cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything.

    This mode is controlled by the _match field in CallSite<T>. The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions.
    This allows the code within each UpdateAndExecute method to check for cache matches like so:

        foreach (T cachedDelegate in Rules) {
            callSite._match = true;
            cachedDelegate(); // sets _match to false if restrictions do not pass
            if (callSite._match) {
                // passed restrictions, and the cached method was invoked
                // set this delegate as the primary target to invoke next time
                callSite.Target = cachedDelegate;
                return;
            }
            // no luck, try the next one...
        }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear: if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the ‘global’ cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. It is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? Like so (I’ve omitted some implementation and performance details): [diagram omitted] That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code. Extensive use of expression trees, compiled to IL and then into native code. Multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked. And a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
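
    If you want to watch this caching behave from user code, a rough and unscientific harness like the sketch below illustrates it: the first call through a dynamic callsite pays for a full bind, repeat calls with the same argument types run the cached rule, and a change of argument type fails the restriction check and forces a rebind. The harness and its method names are mine, not from the original article, and the absolute numbers will vary by machine; only the shape of the result matters.

        using System;
        using System.Diagnostics;

        public class ClassA
        {
            public void Method(int i) { }
            public void Method(string s) { }
        }

        public static class CallSiteDemo
        {
            // A single dynamic callsite lives inside this method;
            // every call below flows through it.
            static void CallDynamic(dynamic d, object o)
            {
                d.Method(o);
            }

            public static void Main()
            {
                var a = new ClassA();
                var sw = Stopwatch.StartNew();

                CallDynamic(a, 10); // first call: full binder lookup, caches populated
                Console.WriteLine("first call: " + sw.Elapsed.TotalMilliseconds + " ms");

                sw.Restart();
                for (int i = 0; i < 1000000; i++)
                    CallDynamic(a, 10); // restriction check passes: cached delegate runs
                Console.WriteLine("1M cached calls: " + sw.Elapsed.TotalMilliseconds + " ms");

                sw.Restart();
                CallDynamic(a, "foo"); // argument type changed: restrictions fail, rebind
                Console.WriteLine("rebind: " + sw.Elapsed.TotalMilliseconds + " ms");
            }
        }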

    Read the article

  • How to import or "using" a custom class in Unity script?

    - by Bobbake4
    I have downloaded the JSONObject plugin for parsing JSON in Unity, but when I use it in a script I get an error indicating JSONObject cannot be found. My question is: how do I use a custom class defined in another file? I know I need a using directive to solve this, but I am not sure of the path to these custom objects I have imported. They are in the root project folder, inside a JSONObject folder, and the class is called JSONObject. Thanks
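
    For what it's worth, Unity scripts don't reference other scripts by path: any class in a .cs file under Assets is visible project-wide, and a using directive is only needed if the plugin declares its class inside a namespace. A minimal sketch, assuming a hypothetical namespace (check the plugin's source to see what, if anything, it actually declares):

        // Hypothetical: suppose the plugin's source wraps the class like this:
        // namespace JsonTools
        // {
        //     public class JSONObject { /* ... */ }
        // }

        using JsonTools; // hypothetical namespace name; match the plugin's source
        using UnityEngine;

        public class ParserBehaviour : MonoBehaviour
        {
            void Start()
            {
                // Constructor is assumed here purely for illustration.
                JSONObject obj = new JSONObject();
            }
        }

    If the plugin's class lives in the global namespace instead, no using directive is needed at all; a "cannot be found" error then usually means the file isn't somewhere under the Assets folder.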

    Read the article

  • Let a model instance choose appropriate view class using category. Is it good design?

    - by Denis Mikhaylov
    Assume I have an abstract base model class called MoneySource, and two concrete subclasses, BankCard and CellularAccount. In MoneySourceListViewController I want to display a list of them, but with a different ListItemView for each MoneySource subclass. What if I define a category on MoneySource:

        @interface MoneySource (ListItemView)
        - (Class)listItemViewClass;
        @end

    and then override it for each concrete subclass of MoneySource, returning a suitable view class:

        @implementation CellularAccount (ListItemView)
        - (Class)listItemViewClass
        {
            return [CellularAccountListView class];
        }
        @end

        @implementation BankCard (ListItemView)
        - (Class)listItemViewClass
        {
            return [BankCardListView class];
        }
        @end

    so I can ask a model object for its view class without violating MVC principles, and avoid class introspection or if constructions. Thank you!

    Read the article

< Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >