Search Results

Search found 54118 results on 2165 pages for 'default value'.


  • Setting up a Carousel Component in ADF Mobile

    - by Shay Shmeltzer
    The Carousel component is one of the slicker ways of showing collections of data, and on a mobile device it works really well with the finger-swipe gesture. Using the Carousel component in ADF Mobile is similar to using it in regular web ADF applications, with one major difference - right now you can't drag a collection from the data control palette and drop it as a carousel. So here is a quick workaround, along with details about setting up carousels in your application.

    First you'll need a data control that returns an array of records. In my demo I'm using the Emps collection that you can get by following this tutorial. Then you drag the emps collection and drop it on your amx page as an ADF Mobile iterator. We are doing this as a shortcut to getting the right binding needed for a carousel in our page. If you look now in your page's bindings you'll see something like this:

    You can now comment out the whole iterator code in your page's source. Next let's add the carousel. From the component palette drag the carousel (from the Data Views category) to the page. Next drag a carousel item and drop it in the nodeStamp facet of the carousel. Now we'll hook up the carousel to the binding we got from the iterator - this is quite simple: just copy the var and value attributes from the iterator tag to the carousel tag:

        var="row" value="#{bindings.emps.collectionModel}"

    Next drop a panelFormLayout, or another layout panel, into the carousel item. Into that panelFormLayout you can now drop items and bind their value properties to row.attributeName - basically copying the way it is done in the fields of the iterator, for example: value="#{row.hireDate}". By the way, you can also copy other attributes, such as the label. And that's it. Your code should end up looking something like this:

        <amx:carousel id="c1" var="row" value="#{bindings.emps.collectionModel}">
          <amx:facet name="nodeStamp">
            <amx:carouselItem id="ci1">
              <amx:panelFormLayout id="pfl1">
                <amx:inputText label="#{bindings.emps.hints.salary.label}" value="#{row.salary}" id="it1"/>
                <amx:inputText label="#{bindings.emps.hints.name.label}" value="#{row.name}" id="it2"/>
              </amx:panelFormLayout>
            </amx:carouselItem>
          </amx:facet>
        </amx:carousel>

    And when you run your application it will look like this:

    Read the article

  • Agile Executives

    - by Robert May
    Over the years, I have experienced many different styles of software development. In the early days, most of the development was Waterfall development. In the last few years, I've become an advocate of Scrum. As I talked about last month, many people have misconceptions about what Scrum really is. The reason why we do Scrum at Veracity is because of the difference it makes in the life of the team doing Scrum. Software is for people, and happy, motivated people will build better software. However, not all executives understand Scrum and how to get the information they need from development teams that use Scrum. I think that these executives need a support system for managing Agile teams.

    Historical Software Management

    When Henry Ford pioneered the assembly line, I doubt he realized the impact he'd have on management through the ages. Historically, management was about managing the process of building things. The people were just cogs in that process, and like all cogs, they were replaceable. Unfortunately, most of the software industry followed this same style of management. Many of today's senior managers learned how to manage companies before software was a significant influence on how the company did business. Software development is a very creative process, but too many managers have treated it like an assembly line: ideas go in, working software comes out, and we just have to figure out how to make sure that the ideas going in are perfect; then the software will be perfect.

    Lean Manufacturing

    In the manufacturing industry, Lean manufacturing has revolutionized Henry Ford's assembly line. Derived from the Toyota process, Lean places emphasis on always providing value for the customer. Anything the customer wouldn't be willing to pay for is wasteful. Agile is based on similar principles. We're building software for people, and anything that isn't useful to them doesn't add value. Waterfall development would have teams build reams and reams of documentation about how the software should work. Agile development dispenses with this work because excessive documentation doesn't add value. Instead, teams focus on building documentation only when it truly adds value to the customer. Many other Agile principles are similar.

    Playing Catch-up

    Just like in the manufacturing industry, many managers in the software industry have yet to understand the value of the principles of Lean and Agile. They think they can wrap the uncertainties of software development up in a nice little package and then just execute, usually followed by failure. They spend a great deal of time and money trying to exactly predict the future. That expenditure of time and money doesn't add value to the customer. Managers that understand Agile know that there is a better way. They will instead focus on the priorities of the near term in detail, and leave the future to take care of itself. They have very detailed two-week plans with less detailed quarterly plans. These plans are guided by a general corporate strategy that doesn't focus on the exact implementation details. These managers also think in smaller features rather than large chunks of functionality. This adds a great deal of value to customers, since the features that matter most are the ones that the team focuses on in the near term and is then able to deliver to the customers that are paying for them. Agile managers also realize that stale software is very costly. They know that keeping the technology in their software current is much less expensive and risky than large rewrites that occur infrequently, so they schedule time in each release for refactoring of the existing software.

    Agile Executives

    Even though Agile is a better way, I've still seen failures using the Agile process. While some of these failures can be attributed to the team, most of them are caused by managers, not the team. Managers fail to understand what Agile is, how it works, and how to get the information that they need to make good business decisions. I think this is a shame. I'm very pleased that Veracity understands this problem and is trying to do something about it. Veracity is a key sponsor of Agile Executives. In fact, Galen is this year's acting president of Agile Executives. The purpose of Agile Executives is to help managers better manage Agile teams and see better success. Agile Executives is trying to build a community of executives that ranges from managers interested in Agile to managers that have successfully adopted Agile. Together, these managers can form a community of support and ideas that will help make Agile teams more successful.

    Helping Your Team

    You can help too! Talk with your manager and get them involved in Agile Executives. Help Veracity build the community. If your manager understands Agile better, he'll understand how to help his teams, which will result in software that adds more value for customers. If you have any questions about how you can be involved, please let me know.

    Technorati Tags: Agile,Agile Executives

    Read the article

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1, on SLES 11.

    My puppet.conf file at /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created at /etc/puppet/hieradata, and within it /etc/puppet/hieradata/common.yaml contains:

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    and /etc/puppet/hieradata/database.yaml contains:

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera to not load the requested values? I have even tried restarting the master. Thanks in advance, Cole

    Read the article

  • Keep remoting into wrong account. Windows 7

    - by Paul
    I have a home theatre PC running Windows 7 with two user accounts. The default account logs in locally. The account 'Paul' is present but is denied local log-in, so the default account auto-logs-in locally. I am trying to remote into the account Paul using RDC; however, it tries to log into the default account and I am presented with an option to boot the present user off so I can log in. How do I specify which account I want to log into?

    Read the article

  • Alternative to SortedDictionary in Silverlight

    - by Rajneesh Verma
    Hi, as we know SortedDictionary is not present in Silverlight, so as an alternative I am using Dictionary (System.Collections.Generic.Dictionary(Of TKey, TValue)) and sorting it with a LINQ query. See the full code below.

        Dim sortedLists As New Dictionary(Of String, Object)
        Dim query = From sortedList In sortedLists
                    Order By sortedList.Key Ascending
                    Select sortedList.Key, sortedList.Value
        For Each que In query
            'get the key value using que.Key
            'get the value using...(read more)
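
    For comparison, here is the same sort-a-plain-Dictionary idea in C# - a minimal sketch of my own (the sample keys and values are made up for illustration; they are not from the post):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SortedLookupDemo
        {
            static void Main()
            {
                // A plain Dictionary, standing in for the missing SortedDictionary.
                var sortedLists = new Dictionary<string, object>();
                sortedLists["beta"] = 2;
                sortedLists["alpha"] = 1;

                // OrderBy yields the pairs in key order, which is the behaviour
                // SortedDictionary would otherwise provide on enumeration.
                foreach (var pair in sortedLists.OrderBy(p => p.Key))
                    Console.WriteLine(pair.Key + " = " + pair.Value);
            }
        }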

    Read the article

  • XenApp 6.5 – How to create and set a Policy using PowerShell

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/06/20/xenapp-6.5--how-to-create-and-set-a-policy.aspx

    Here is my homework:

        Add-PSSnapin -name Citrix.Common.* -ErrorAction SilentlyContinue
        New-Item LocalFarmGpo:\User\MyPolicy
        cd LocalFarmGpo:\User\MyPolicy\Settings\ICA\Security
        Set-ItemProperty .\MinimumEncryptionLevel State Enabled
        Set-ItemProperty .\MinimumEncryptionLevel Value Bits128
        cd LocalFarmGpo:\User\MyPolicy\Filters\WorkerGroup
        New-Item -Name "All Servers" -Value "All Servers"
        Set-ItemProperty LocalFarmGpo:\User\MyPolicy -Name Priority -Value 2

    So cute …

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:

        Random generator = new Random();
        Func<int> func = () => generator.Next(10);

    In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute.

        [CompilerGenerated]
        private sealed class <>c__DisplayClass1
        {
            // Fields
            public Random generator;

            // Methods
            public int <Main>b__0()
            {
                return this.generator.Next(10);
            }
        }

    Two quick comments on this:

    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name.

    (ii) It is a shame that the same attribute is used to mark all compiler generated classes, as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts.

    We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.

        <>c__DisplayClass1 class2 = new <>c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.<Main>b__0);

    Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled. The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.

        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerator interface, with its Current property for fetching the current value and its MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerable<int>-returning method using the yield keyword causes the compiler to produce the implementation of an iterator. Take the following piece of code.

        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. This latter value is used for optimising the creation of iterator instances.

        [CompilerGenerated]
        private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
        {
            // Fields
            private int <>1__state;
            private int <>2__current;
            public Program <>4__this;
            private int <>l__initialThreadId;

    The body gets converted into the code to construct and initialize this new class.

        private IEnumerable<int> GetItems()
        {
            <GetItems>d__0 d__ = new <GetItems>d__0(-2);
            d__.<>4__this = this;
            return d__;
        }

    When the class is constructed we set the state, which was passed through as -2, and the current thread.

        public <GetItems>d__0(int <>1__state)
        {
            this.<>1__state = <>1__state;
            this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
        }

    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.

        IEnumerator<int> IEnumerable<int>.GetEnumerator()
        {
            if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
                && (this.<>1__state == -2))
            {
                this.<>1__state = 0;
                return this;
            }

    The state machine itself is implemented inside the MoveNext method.

        private bool MoveNext()
        {
            switch (this.<>1__state)
            {
                case 0:
                    this.<>1__state = -1;
                    this.<>2__current = 1;
                    this.<>1__state = 1;
                    return true;
                case 1:
                    this.<>1__state = -1;
                    this.<>2__current = 2;
                    this.<>1__state = 2;
                    return true;
                case 2:
                    this.<>1__state = -1;
                    this.<>2__current = 3;
                    this.<>1__state = 3;
                    return true;
                case 3:
                    this.<>1__state = -1;
                    break;
            }
            return false;
        }

    At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally we return false from MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value.

    Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of work we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that, Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields.

    In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.
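
    To make the enumerator protocol concrete, here is a small self-contained example (my own, not from the article) that walks an iterator by hand the way foreach does:

        using System;
        using System.Collections.Generic;

        class IteratorDemo
        {
            static IEnumerable<int> GetItems()
            {
                yield return 1;
                yield return 2;
                yield return 3;
            }

            static void Main()
            {
                // foreach expands to roughly this: get the enumerator, then
                // alternate MoveNext (advance the state machine) and Current.
                using (IEnumerator<int> e = GetItems().GetEnumerator())
                {
                    while (e.MoveNext())
                        Console.WriteLine(e.Current);
                }
            }
        }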

    Read the article

  • debian/ubuntu locales and language settings

    - by AndreasT
    This self-answered question solves the following issues:

        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory

    and some other locale-related problems.

    Read the article

  • Code contracts and inheritance

    - by DigiMortal
    In my last posting about code contracts I introduced how to force code contracts onto classes through interfaces. In this posting I will go a step further and show you how code contracts work in the case of inherited classes. As a first thing, let's take a look at my interface and code contracts.

        [ContractClass(typeof(ProductContracts))]
        public interface IProduct
        {
            int Id { get; set; }
            string Name { get; set; }
            decimal Weight { get; set; }
            decimal Price { get; set; }
        }

        [ContractClassFor(typeof(IProduct))]
        internal sealed class ProductContracts : IProduct
        {
            private ProductContracts() { }

            int IProduct.Id
            {
                get { return default(int); }
                set { Contract.Requires(value > 0); }
            }

            string IProduct.Name
            {
                get { return default(string); }
                set
                {
                    Contract.Requires(!string.IsNullOrWhiteSpace(value));
                    Contract.Requires(value.Length <= 25);
                }
            }

            decimal IProduct.Weight
            {
                get { return default(decimal); }
                set
                {
                    Contract.Requires(value > 3);
                    Contract.Requires(value < 100);
                }
            }

            decimal IProduct.Price
            {
                get { return default(decimal); }
                set
                {
                    Contract.Requires(value > 0);
                    Contract.Requires(value < 100);
                }
            }
        }

    And here is the Product class that implements IProduct.

        public class Product : IProduct
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual decimal Weight { get; set; }
            public decimal Price { get; set; }
        }

    If we run the following code and violate the contract set on Id, we get a ContractException.

        public class Program
        {
            static void Main(string[] args)
            {
                var product = new Product();
                product.Id = -100;
            }
        }

    Now let's make Product an abstract class and define a new class called Food that adds further contracts to the Weight property.

        public class Food : Product
        {
            public override decimal Weight
            {
                get { return base.Weight; }
                set
                {
                    Contract.Requires(value > 1);
                    Contract.Requires(value < 10);

                    base.Weight = value;
                }
            }
        }

    Now we should have the following rules in place for Food: weight must be greater than 1, weight must be greater than 3, weight must be less than 100, and weight must be less than 10. The interesting part is what happens when we try to violate the lower and upper limits of Food's weight. To see what happens, let's try to violate rules #2 and #4. Just comment one of the last lines out in the following method to test each assignment.

        public class Program
        {
            static void Main(string[] args)
            {
                var food = new Food();
                food.Weight = 12;
                food.Weight = 2;
            }
        }

    And here are the results as pictures, showing where the exceptions are thrown. Click on the images to see them at original size. Violation of lower limit. Violation of upper limit. As you can see, for both violations we get a ContractException, as expected. Code contracts inheritance is a powerful and at the same time dangerous feature. Although you can always narrow down the conditions that come from more general classes, it is possible to define impossible or conflicting contracts at different points in the inheritance hierarchy.

    Read the article

  • Clear list of recent repositories in Git Extensions

    - by Marko Apfel
    Orphaned and wrongly specified repositories in the recent list are annoying, and I did not immediately find an option to clean these entries - nor where they are persisted. So it was time for Process Explorer. The storage happens under:

        HKEY_CURRENT_USER\Software\GitExtensions\GitExtensions\1.0.0.0

    in the string value "history". You can edit the content of the string value or delete it - then, when Git Extensions restarts, the string value will be recreated with a default skeleton.
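
    For illustration, here is a minimal C# sketch of clearing that list by deleting the value described above (my own snippet, not from the post; presumably best run while Git Extensions is closed):

        using Microsoft.Win32;

        class ClearRecentRepositories
        {
            static void Main()
            {
                // Open the Git Extensions settings key named above for writing.
                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
                    @"Software\GitExtensions\GitExtensions\1.0.0.0", writable: true))
                {
                    // Delete the "history" value; Git Extensions recreates a
                    // default skeleton on the next start.
                    if (key != null)
                        key.DeleteValue("history", throwOnMissingValue: false);
                }
            }
        }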

    Read the article

  • EM12c: Using the LIST verb in emcli

    - by SubinDaniVarughese
    Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of list and the Jython-based scripting support in EM CLI makes it easier to achieve automation for complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!

    1. Multiple resources under a single verb:
    A resource can be a set of users or targets, etc. Using the list verb, you can retrieve information about a resource from the repository database. Here is an example which retrieves the list of administrators within EM.

    Standard mode:

        $ emcli list -resource="Administrators"

    Interactive mode:

        emcli> list(resource="Administrators")

    The output will be the same as standard mode.

    Script mode:

        $ emcli @myAdmin.py
        Enter password :  ******

    The output will be the same as standard mode. Contents of the myAdmin.py script:

        login()
        print list(resource="Administrators", jsonout=False).out()

    To get a list of all available resources use:

        $ emcli list -help

    With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable, then go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

    2. Consistent formatting:
    It is possible to format the output of any resource consistently using these options.

    -columns: This option is used to specify which columns should be shown in the output. Here is an example which shows the list of administrators and their account status:

        $ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

    To get a list of columns in a resource use:

        $ emcli list -resource="Administrators" -help

    You can also specify the width of each column. For example, here the column width of user_type is set to 20 and department to 30:

        $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"

    This is useful if your terminal is too small or you need to fine-tune a list of specific columns for your quick use or improved readability.

    -colsize: This option is used to resize column widths. Here is the same example as above, but using -colsize to define the width of user_type as 20 and department as 30:

        $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

    The existing standard EM CLI formatting options are also available in the list verb: -format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script. There are so many uses depending on your needs. Have a look at the resources and columns in each resource. Refer to the EM CLI book in the EM documentation for more information.

    3. Search:
    Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to the sqlplus where clause.
    The following operators are supported: =, !=, >, <, >=, <=, like, and is (must be followed by null or not null).

    Here is an example which searches for all EM administrators in the marketing department located in the USA:

        $ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

    Here is another example which shows all the named credentials created since a specific date:

        $ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"

    Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM.

    Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, this command will list all the default preferred credentials for target type oracle_database:

        $ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"

    You can provide multiple bind variables. To verify if a column is searchable or requires a bind variable, use the -help option. Here is an example:

        $ emcli list -resource="PreferredCredentialsDefault" -help

    4. Secure access:
    When the list verb collects data, it only displays content for which the administrator currently logged into EM CLI has access. For example, consider this use case: AdminA has access only to TargetA. AdminA logs into EM CLI. Executing the list verb to get the list of all targets will only show TargetA.

    5. User-defined SQL:
    Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema. To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It is always recommended to reference the documentation for the supported MGMT$ database views. Suppose you were using an MGMT$ABC view which is not in that chapter: since the view was not in the book and not supported, it is likely the view might undergo a change in its structure, or the data in it, during an upgrade. Using a supported view ensures that your scripts using -sql will continue working after upgrade. Here's an example:

        $ emcli list -sql='select * from mgmt$target'

    6. JSON output support:
    JSON (JavaScript Object Notation) enables data to be displayed as a collection of name/value pairs. There is a lot of reading material about JSON online for more information. As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment and the developers had requested the administrator to change the lifecycle status from Test to Production, which meant the admin had to go to the EM "All Targets" page, identify the set of 11.2 databases, and then go into each target database page and manually change the property to Production. Sounds easy to say, but this administrator had numerous targets, and this task is repeated for every release cycle. We told him there is an easier way to do this with a script, which he can reuse whenever anyone wants to change a set of targets to a different lifecycle status.
    Here is a Jython script which uses list and JSON to change the 'LifeCycle Status' property value of all 11.2 database targets. If you are new to scripting and Jython, I would suggest visiting the basic chapters of any Jython tutorial; understanding Jython is important for writing the logic for your use case. If you already write scripts in Perl or shell, or know a programming language like Java, then you can easily understand the logic.

    Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

         1  from emcli import *
         2  search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
         3  if len(sys.argv) == 2:
         4      print login(username=sys.argv[0])
         5      l_prop_val_to_set = sys.argv[1]
         6      l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
         7      for target in l_targets.out()['data']:
         8          t_pn = 'LifeCycle Status'
         9          print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
        10          print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
        11  else:
        12      print "\n ERROR: Property value argument is missing"
        13      print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"

    You can download the script from here. I could not upload the file with a .py extension, so you need to rename the file to myScript.py before executing it using emcli.

    A line-by-line explanation for beginners:

    Line 1: Imports the emcli verbs as functions.
    Line 2: search_list is a variable to pass to the search option in the list verb. I am using escape characters for the single quotes. To pass more than one value for the same option in the list verb, define comma-separated values surrounded by square brackets, as above.
    Line 3: This is an "if" condition to ensure the user provides two arguments with the script; otherwise line 12 prints an error message.
    Line 4: Logging into EM. You can remove this if you have set up emcli with autologin. For more details about setup and autologin, please go to the EM CLI book in the EM documentation.
    Line 5: l_prop_val_to_set is another variable. This is the property value to be set. Remember we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
    Line 6: Here the output of the list verb is stored in l_targets. In the list verb I am passing the resource as TargetProperties, search as the search_list variable, and only the three columns I need - target_name, target_type and property_name. I don't need the other columns for my task.
    Line 7: This is a for loop. The data in l_targets is available in JSON format; using the for loop, each pair will now be available in the 'target' variable.
    Line 8: t_pn is the "LifeCycle Status" variable. If required, I could take this as an input too and then use the script to change any target property. In this example, I just wanted to change the "LifeCycle Status".
    Line 9: This is a message informing the user that the script is setting the property value for dbxyz.
    Line 10: This line shows the set_target_property_value verb, which sets the value using the property_records option. Once it is set for a target pair, it moves to the next one. In my example I am just showing three dbs, but the real use is when you have 20 or 50 targets.
    The script is executed as:

        $ emcli @myScript.py subin Production

    The recommendation is to first test scripts before running them on a production system. We tested on a small set of targets and optimized the script for fewer lines of code and better messaging. For your quick reference, the resources available with the list verb in Enterprise Manager 12.1.0.4.0 can be listed with:

        $ emcli list -help

    Watch this space for more blog posts using the list verb and EM CLI scripting use cases. I hope you enjoyed reading this blog post and that it has helped you gain more information about the list verb. Happy Scripting!!

    Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
    Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Seizing naming master from child domain server

    - by meera
    When I am trying to seize the naming master role from my child domain server, I get the following error:

        fsmo maintenance: seize naming master
        Attempting safe transfer of domain naming FSMO before seizure.
        ldap_modify_sW error 0x34(52 (Unavailable).
        Ldap extended error message is 000020AF: SvcErr: DSID-03210380, problem 5002 (UNAVAILABLE), data 8438
        Win32 error returned is 0x20af(The requested FSMO operation failed. The current FSMO holder could not be contacted.)
        )
        Depending on the error code this may indicate a connection, ldap, or role transfer error.
        Transfer of domain naming FSMO failed, proceeding with seizure ...
        Server "win-fb20ixk90mu" knows about 5 roles
        Schema - CN=NTDS Settings,CN=WIN-3918XHC5STU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Naming Master - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        PDC - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        RID - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Infrastructure - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com

    Read the article

  • Database Rebuild

    - by Robert May
    I promised I'd have a simpler mechanism for rebuilding the database. Below is a complete MSBuild targets file for rebuilding the database from scratch. I don't know if I've explained the rationale for this. The reason why you'd WANT to do this is so that each developer has a clean version of the database on their local machine. This also includes the continuous integration environment. Basically, you can do whatever you want to the database without fear, and in a minute or two have a completely rebuilt database structure.

    DBDeploy (including the KTSC build task for dbdeploy) is used in this script to do change tracking on the database itself. The MSBuild ExtensionPack is used in this targets file. You can get an MSBuild DBDeploy task here.

    There are two database scripts that you'll see below. First is the script for creating an admin (dbo) user in the system:

        USE [master]
        GO
        If not Exists (select Name from sys.sql_logins where name = '$(User)')
        BEGIN
            CREATE LOGIN [$(User)] WITH PASSWORD=N'$(Password)', DEFAULT_DATABASE=[$(DatabaseName)], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
        END
        GO
        EXEC master..sp_addsrvrolemember @loginame = N'$(User)', @rolename = N'sysadmin'
        GO
        USE [$(DatabaseName)]
        GO
        CREATE USER [$(User)] FOR LOGIN [$(User)]
        GO
        ALTER USER [$(User)] WITH DEFAULT_SCHEMA=[dbo]
        GO
        EXEC sp_addrolemember N'db_owner', N'$(User)'
        GO

    The second creates the changelog table. This script can also be found in the dbdeploy.net install\scripts directory.

        CREATE TABLE changelog
        (
            change_number INTEGER NOT NULL,
            delta_set VARCHAR(10) NOT NULL,
            start_dt DATETIME NOT NULL,
            complete_dt DATETIME NULL,
            applied_by VARCHAR(100) NOT NULL,
            description VARCHAR(500) NOT NULL
        )
        GO
        ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set)
        GO

    Finally, here's the targets file:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Update">
          <PropertyGroup>
            <DatabaseName>TestDatabase</DatabaseName>
            <Server>localhost</Server>
            <ScriptDirectory>.\Scripts</ScriptDirectory>
            <RebuildDirectory>.\Rebuild</RebuildDirectory>
            <TestDataDirectory>.\TestData</TestDataDirectory>
            <DbDeploy>.\DBDeploy</DbDeploy>
            <User>TestUser</User>
            <Password>TestPassword</Password>
            <BCP>bcp</BCP>
            <BCPOptions>-S$(Server) -U$(User) -P$(Password) -N -E -k</BCPOptions>
            <OutputFileName>dbDeploy-output.sql</OutputFileName>
            <UndoFileName>dbDeploy-output-undo.sql</UndoFileName>
            <LastChangeToApply>99999</LastChangeToApply>
          </PropertyGroup>

          <Import Project="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks"/>
          <UsingTask TaskName="Ktsc.Build.DBDeploy" AssemblyFile="$(DbDeploy)\Ktsc.Build.dll"/>

          <ItemGroup>
            <Variable Include="DatabaseName">
              <Value>$(DatabaseName)</Value>
            </Variable>
            <Variable Include="Server">
              <Value>$(Server)</Value>
            </Variable>
            <Variable Include="User">
              <Value>$(User)</Value>
            </Variable>
            <Variable Include="Password">
              <Value>$(Password)</Value>
            </Variable>
          </ItemGroup>

          <Target Name="Rebuild">
            <!-- Take the database offline to disconnect any users. Requires that the current user is an admin of the sql server machine. -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd Variables="@(Variable)" Database="$(DatabaseName)" TaskAction="Execute" CommandLineQuery="ALTER DATABASE $(DatabaseName) SET OFFLINE WITH ROLLBACK IMMEDIATE"/>

            <!-- Bring it back online. If you don't, the database files won't be deleted. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="SetOnline" DatabaseItem="$(DatabaseName)"/>

            <!-- Delete the database, removing the existing files. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Delete" DatabaseItem="$(DatabaseName)"/>

            <!-- Create the new database in the default database path location. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Create" DatabaseItem="$(DatabaseName)" Force="True"/>

            <!-- Create admin user -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" InputFiles="$(RebuildDirectory)\0002 Create Admin User.sql" Variables="@(Variable)"/>

            <!-- Create the dbdeploy changelog. -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(RebuildDirectory)\0003 Create Changelog.sql" Variables="@(Variable)"/>

            <CallTarget Targets="Update;ImportData"/>
          </Target>

          <Target Name="Update" DependsOnTargets="CreateUpdateScript">
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(OutputFileName)" Variables="@(Variable)"/>
          </Target>

          <Target Name="CreateUpdateScript">
            <Ktsc.Build.DBDeploy DbType="mssql"
                                 DbConnection="User=$(User);Password=$(Password);Data Source=$(Server);Initial Catalog=$(DatabaseName);"
                                 Dir="$(ScriptDirectory)"
                                 OutputFile="..\$(OutputFileName)"
                                 UndoOutputFile="..\$(UndoFileName)"
                                 LastChangeToApply="$(LastChangeToApply)"/>
          </Target>

          <Target Name="ImportData">
            <ItemGroup>
              <TestData Include="$(TestDataDirectory)\*.dat"/>
            </ItemGroup>
            <Exec Command="$(BCP) $(DatabaseName).dbo.%(TestData.Filename) in &quot;%(TestData.Identity)&quot; $(BCPOptions)"/>
          </Target>
        </Project>

    Technorati Tags: MSBuild

    Read the article

  • Change sysout logging level for Weblogic

    - by Justin Voss
    When I run a local copy of Weblogic, I like to see the output in the console so that I can observe my app's logging messages. But Weblogic spits out a lot of log messages I don't care about, like these:

        [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' 08-29-2010 01:02:21 INFO Getting a JNDI connection
        [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' 08-29-2010 01:02:21 INFO Connection Returned. Elapsed time to acquire=0ms.
        [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' 08-29-2010 01:02:21 INFO Getting a JNDI connection
        [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' 08-29-2010 01:02:21 INFO Connection Returned. Elapsed time to acquire=0ms.

    Can I configure Weblogic to not output those? I assume that I can change the logging level to something higher than INFO and that should fix it?

    Read the article

  • Translating debian network configuration to gentoo

    - by thpetrus
    I just got rid of Debian on my VPS (OpenVZ) and installed Gentoo on it; however, it is a plain Gentoo image without further configuration, i.e. no working network. I'm not familiar with Debian and couldn't figure out how to translate the network setup. These are the Debian network files.

    /etc/network/interfaces:

        auto venet0
        iface venet0 inet manual
            up ifconfig venet0 up
            up ifconfig venet0 127.0.0.2
            up route add default dev venet0
            down route del default dev venet0
            down ifconfig venet0 down

        iface venet0 inet6 manual
            up ifconfig venet0 add ipv6addr/128
            down ifconfig venet0 del ipv6addr/128
            up route -A inet6 add default dev venet0
            down route -A inet6 del default dev venet0

        auto venet0:0
        iface venet0:0 inet static
            address external_ip
            netmask 255.255.255.255

        auto venet0:1
        iface venet0:1 inet static
            address internal_ip
            netmask 255.255.255.255

    Please note that external_ip, internal_ip and ipv6addr are placeholders. I copied /etc/resolv.conf, know the gateway_ip, and also have another output of ifconfig, if necessary. This is what I came up with for /etc/conf.d/net:

        config_venet0="127.0.0.2 netmask 255.255.255.255 brd 0.0.0.0"
        config_venet0:0="external_ip netmask 255.255.255.255 brd 0.0.0.0"
        route_venet0:0="default via gateway_ip"
        config_venet0:1="internal_ip netmask 255.255.255.255 brd 0.0.0.0"

    The broadcast IP is taken from the Debian ifconfig output - however, it doesn't work. A symbolic link net.venet0:0 -> net.lo was created in /etc/init.d/ and I added net.venet0:0 to the boot runlevel.

    Read the article

  • shorewall masquerading from tun0 to ppp0

    - by damir
    First interface: ppp0 (pptp vpn)
    Second interface: tun0 (openvpn)
    Third interface: eth0 (default gw interface)

    OpenVPN is set to change the default route on the client so that all packets go through the tun0 vpn; that part is working OK. I would like to make all packets from tun0 go to ppp0 and get out from that interface (MASQ), but somehow they always end up on eth0 (the default gw interface).

    /etc/shorewall/masq:

        ppp0 tun0

    This doesn't seem to work.

    Read the article

  • php handler as cgi/suexec

    - by sujit
    In my phpinfo the PHP handler shows as cgi/fastcgi, but I want to change it to cgi/suexec. I tried the WHM PHP and suEXEC configuration page, and I found that suphp is the default PHP handler - so why is phpinfo showing cgi/fastcgi as the default handler? I want to change to cgi/suexec because fastcgi is not working with the PHP APC handler.

    Output of phpinfo:

        Server API    CGI/FastCGI

    However, WHM shows:

        Configure PHP and SuExec
        Option                              Configured Value
        Default PHP Version (.php files)    5
        PHP 5 Handler                       suphp
        PHP 4 Handler                       none
        Suexec                              on

    Read the article

  • Adaptive Layout for ADF Faces on Tablets

    - by Shay Shmeltzer
    In the 11.1.1.6 version of Oracle ADF we started adding specific features to the ADF Faces components so they'll work better on iPad tablets. In this entry I'm going to highlight some new capabilities that we have added to the 11.1.2.3 release. (Note: if you are still on the 11.1.1.* branch, you'll need to wait for 11.1.1.7 to get the features discussed here.)

    The two key additions in the 11.1.2.3 version, compared to the 11.1.1.6 features for iPad support, are pagination for tables and adaptive flow layout. Pagination for tables is self-explanatory: since iPads don't support scroll bars, we automatically switch the table component to render with a pagination toolbar that allows you to scroll through sets of records or jump directly to a specific set. See the image below.

    The adaptive flow layout takes a bit more explanation. On regular desktops the UI that you usually build for ADF Faces screens is going to use a stretch layout - meaning that it stretches to fill the whole area of the browser window. If you resize the browser window, the ADF Faces page resizes with it. If your browser window is too small, scroll bars will appear to allow you to scroll to areas that are "hidden". However, on an iPad this is probably not the type of layout you want - you would rather have a flow layout that eliminates scroll bars and instead allows you to scroll down the page. Basically you want the page to be sized based on its content, rather than based on the browser window size. In ADF Faces terminology this can be done with the dimensionsFrom property set to "children".

    And here comes the tricky part: since in the past (and also today) when you create an ADF Faces page and add a stretchable component to it, the dimensionsFrom property is set to "parent" by default. This will be true for other layout components you add as well. At this point you might be wondering, "Does this mean I'll need to go to each of the layout components in my page and modify the dimensionsFrom property value to be children?" ADF Faces to the rescue... To eliminate the need for these tedious manual changes, we introduced a new web.xml parameter, "oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS". You basically add the following to your web.xml:

        <context-param>
          <description>
            This parameter controls the default value for component geometry on the page.
            Supported values are:
              legacy - component attributes use the default values as specified for the attributes
                       in the tag documentation (default value)
              auto   - component attributes use the correct default value given the value of their
                       parent component. For example, with this setting, the panelStretchLayout
                       will use "auto" as the default value for its "dimensionsFrom" attribute
                       instead of "parent".
          </description>
          <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name>
          <param-value>auto</param-value>
        </context-param>

    Once you set this parameter, you only need to set the dimensionsFrom attribute on the top-level layout component of your page, and the rest of the components will adjust accordingly. One trick that you can use, and that is used in the demo below, is to have the dimensionsFrom property depend on the type of client that accesses your application. This way you can switch between stretch or flow layout based on the device accessing your application.

    For example, I use the following in my page:

        <af:panelStretchLayout topHeight="70px" startWidth="0px" endWidth="0px"
             dimensionsFrom="#{adfFacesContext.agent.capabilities['touchScreen'] eq 'none' ? 'parent' : 'children'}">

    This results in a flow layout for iPads and a stretch layout for regular browsers. Check out the result in the demo below:

    Read the article

  • Timeout option not working on efi windows 7/windows8 dual boot machine

    - by Guenter
    I have a Gigabyte GA-Z77M-D3H mobo and installed Windows 8 Pro and Windows 7 Ultimate on two SSDs (in that order) in EFI mode. Now when I start my computer, I get the Windows boot menu (text mode) with the two OSes to choose from, but I have to manually press RETURN to have the computer boot into the chosen OS. Even if I wait an hour, no default action takes place. Using bcdedit (from either of the OSes) I can successfully change the timeout value, and it shows up in the bcdedit (no params) output. But it doesn't fire... Here is my current bcdedit output (headers are in German, but values should be readable):

        Windows-Start-Manager
        ---------------------
        Bezeichner              {bootmgr}
        device                  partition=O:
        path                    \EFI\Microsoft\Boot\bootmgfw.efi
        description             Windows Boot Manager
        locale                  de-DE
        inherit                 {globalsettings}
        integrityservices       Enable
        default                 {default}
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        displayorder            {default}
                                {current}
                                {5ad2802a-c60a-11e2-acdb-80331c501b11}
                                {5ad28028-c60a-11e2-acdb-80331c501b11}
                                {5ad28029-c60a-11e2-acdb-80331c501b11}
        toolsdisplayorder       {memdiag}
        timeout                 5
        displaybootmenu         Yes

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {default}
        device                  partition=W:
        path                    \Windows\system32\winload.efi
        description             Windows 7
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad2802e-c60a-11e2-acdb-80331c501b11}
        recoveryenabled         Yes
        osdevice                partition=W:
        systemroot              \Windows
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {current}
        device                  partition=C:
        path                    \Windows\system32\winload.efi
        description             Windows 8
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad28033-c60a-11e2-acdb-80331c501b11}
        integrityservices       Enable
        recoveryenabled         Yes
        isolatedcontext         Yes
        allowedinmemorysettings 0x15000075
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {5ad28031-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn
        bootmenupolicy          Standard
        hypervisorlaunchtype    Auto

    (This output is from Win8; the Win7 output looks nearly identical.) If the problem comes from a bad EFI Windows boot manager installation, can it be fixed without losing my Windows installations?

    Read the article

  • Ping access to a nested VM - please help

    - by Shivaramakrishnan
    I have a problem with communication between the host and a nested VM. This is my layout: I have KVM installed on the host machine, which has a single nic interface (public IP) and runs Ubuntu. On top of this I have a VM running Ubuntu, with KVM installed in this VM too. I then have a VM inside this running a web server. I am able to ping the host from this web server VM and ssh into it, but pinging from the host to the web server VM is unsuccessful.

    The VM (named L1hyp) on the host was created using libvirt-manager and has the IP 192.168.122.8. The vswitch interface created at the host is in the default config (NAT-ed); its IP is 192.168.122.1. This VM also has a vswitch interface in the default config (NAT-ed); its IP is 192.168.100.1. The web server VM is created on top of this L1hyp VM and has the IP 192.168.100.186. The web server VM uses 192.168.100.1 as its default gateway, and L1hyp uses 192.168.122.1 as its default gateway.

    From the host:

        ping 192.168.122.8   - SUCCEEDS
        ping 192.168.122.1   - SUCCEEDS
        ping 192.168.100.1   - SUCCEEDS
        ping 192.168.100.186 - FAILS (Destination Host Unreachable from 192.168.122.1)

    But there is a route to the 192.168.100.0/24 subnet from the host, and a ping to 192.168.100.1 succeeds.

    From the web server VM:

        ping 192.168.100.1 - SUCCEEDS
        ping 192.168.122.1 - SUCCEEDS

    SSH from the web server VM to the host also succeeds. Can anyone help me work out what needs to be modified to get two-way communication between the host and the web server VM? I have been pondering this problem for over a week now.

    Read the article

  • Why is ntpd not updating the time on my server?

    - by John
    I have ntpd running on my server. It's all the default settings, except I commented out its ability to be a server to other machines:

        # restrict -4 default kod notrap nomodify nopeer noquery
        # restrict -6 default kod notrap nomodify nopeer noquery
        restrict default ignore

    If I run ntpdate -q ntp.ubuntu.com, I'm told that my machine's clock is off by 7 seconds. What's going on? How can I diagnose what's happening - is there a log I can turn on?

    More info #1:

        # ntpq -np
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
         91.189.94.4     193.79.237.14    2 u   30   64    7  108.518   -0.136   0.361

    More info #2: Here's what this looked like when I asked the question:

        # ntpdate -q ntp.ubuntu.com
        server 91.189.94.4, stratum 2, offset 7.191308, delay 0.13310
        10 Jan 20:38:09 ntpdate[31055]: step time server 91.189.94.4 offset 7.191308 sec

    And here's what it looks like now, after restarting ntpd a couple times (I'm assuming that's what fixed it):

        # ntpdate -q ntp.ubuntu.com
        server 91.189.94.4, stratum 2, offset 0.000112, delay 0.13164
        10 Jan 20:47:03 ntpdate[31419]: adjust time server 91.189.94.4 offset 0.000112 sec

    Read the article

  • How To Change Attachment Size in WorkItems in TFS 2010

    - by Ravi
    Recently, I came across an issue where I had to change the size limit for work item attachments in TFS 2010. I searched all around the internet only to find very little information about it, and honestly what there was wasn't clear. So after breaking my head over it for some time, I was successful. Here are my conclusions and the procedure.

    1. You DON'T have to change it programmatically. You can do it directly from the IIS web services.
    2. You CAN change it programmatically too, by making an entry in the TFS Registry using a small piece of code.

    Let me show you how it is done from IIS. This changes the attachment size limit for work items in TFS 2010 to your required value, for each collection individually.

    1. You must be a TFS admin to do this (log in with the setup account).
    2. Browse to http://localhost:8080/tfs/<YOUR-COLLECTION-NAME>/WorkItemTracking/v1.0/ConfigurationSettingsService.asmx
    3. You'll see 3 asmx services - GetMaxAttachmentSize, GetWorkItemTrackingVersion and SetMaxAttachmentSize.
    4. To find the current maximum attachment size for a collection, click on the GetMaxAttachmentSize service and then on the 'Invoke' button; the current value is returned (in bytes).
    5. Now click on the SetMaxAttachmentSize web service and fill in the value of your choice.
    6. Reset IIS (not strictly required, honestly, but I did it just to be sure).
    7. Now try attaching a file greater than the size you've set. It will be rejected, as expected. Below is the error which you'd see in such scenarios.

    Let me know if you see any issues and I'll be happy to help..!
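
    The post doesn't show the code for the programmatic route (option 2), so here is a minimal sketch of how a TFS Registry entry can be written with the TFS 2010 client API. The registry path and the SetValue call shown here are assumptions based on common TFS 2010 recipes, not taken from the post - verify both against your own server before relying on them:

        // Hedged sketch: set the work item attachment limit via the TFS Registry.
        // Assumes the Microsoft.TeamFoundation client assemblies (TFS 2010 SDK)
        // are referenced; the registry path below is illustrative - confirm it.
        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Framework.Client;

        class SetMaxAttachmentSize
        {
            static void Main()
            {
                var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://localhost:8080/tfs/YourCollectionName"));

                // ITeamFoundationRegistry exposes the TFS Registry mentioned above.
                var registry = collection.GetService<ITeamFoundationRegistry>();

                // Hypothetical path for the attachment limit - confirm before use.
                registry.SetValue("/Service/WorkItemTracking/Settings/MaxAttachmentSize",
                                  "4194304"); // 4 MB, in bytes
            }
        }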

    Read the article

  • What is the simplest human readable configuration file format?

    - by Juha
    The current configuration file is as follows:

        mainwindow.title = 'test'
        mainwindow.position.x = 100
        mainwindow.position.y = 200
        mainwindow.button.label = 'apply'
        mainwindow.button.size.x = 100
        mainwindow.button.size.y = 30
        logger.datarate = 100
        logger.enable = True
        logger.filename = './test.log'

    This is read with Python into a nested dictionary:

        {
            'mainwindow': {
                'button': {
                    'label': {'value': 'apply'},
                    ...
                },
                ...
            },
            'logger': {
                'datarate': {'value': 100},
                'enable': {'value': True},
                'filename': {'value': './test.log'}
            },
            ...
        }

    Is there a better way of doing this? The idea is to get XML-like behaviour while avoiding XML for as long as possible. The end user is assumed to be almost totally computer illiterate, basically using Notepad and copy-paste, so the standard Python "[header]" plus variables style is considered too difficult. The dummy user edits the config file; able programmers handle the dictionaries. A nested dictionary was chosen for easy splitting (the logger does not need, and indeed cannot have or edit, the mainwindow parameters).
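    One option that keeps this exact file format is a small hand-rolled parser. Below is a minimal sketch (my own illustration, not part of the question) that reads the dotted-key lines into the nested dictionary shown above; it assumes the values are Python literals, so ast.literal_eval handles the quoting, numbers and booleans:

        import ast

        def parse_config(text):
            """Parse "a.b.c = value" lines into nested dicts with {'value': ...} leaves."""
            root = {}
            for line in text.splitlines():
                line = line.strip()
                if not line or line.startswith('#'):
                    continue  # skip blank lines and comments
                dotted_key, _, raw_value = line.partition('=')
                parts = dotted_key.strip().split('.')
                node = root
                for part in parts[:-1]:
                    node = node.setdefault(part, {})  # descend, creating levels as needed
                node[parts[-1]] = {'value': ast.literal_eval(raw_value.strip())}
            return root

        with open('test.cfg') as f:
            config = parse_config(f.read())
        print(config['logger']['filename']['value'])   # -> ./test.log

    Formats such as YAML offer a similarly flat, human-readable syntax if a third-party dependency is acceptable.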

    Read the article

  • Loosely coupled .NET Cache Provider using Dependency Injection

    - by Rhames
    I have recently been reading the excellent book "Dependency Injection in .NET", written by Mark Seemann. I do not generally buy software development related books, as I never seem to have the time to read them, but I found the time to read Mark's book, and it was time well spent, I think. Reading the ideas around Dependency Injection made me realise that the Cache Provider code I wrote about earlier (see http://geekswithblogs.net/Rhames/archive/2011/01/10/using-the-asp.net-cache-to-cache-data-in-a-model.aspx) could be refactored to use Dependency Injection, which should produce cleaner code. The goals are to:

    - Separate the cache provider implementation (using the ASP.NET data cache) from the consumers (loose coupling). This also means that the dependency on System.Web for the cache provider does not ripple down into the layers where it is consumed (such as the domain layer).
    - Provide a decorator pattern to allow a consumer of the cache provider to be implemented separately from the base consumer (i.e. if we have a base repository, we can decorate it with a caching version). Although I use the term repository, in reality the cache consumer could be just about anything.
    - Use constructor injection to provide the Dependency Injection, with a suitable DI container (I use Castle Windsor).

    The sample code for this post is available on github: https://github.com/RobinHames/CacheProvider.git

    ICacheProvider

    In the sample code, the key interface is ICacheProvider, which lives in the domain layer:

        using System;
        using System.Collections.Generic;

        namespace CacheDiSample.Domain
        {
            public interface ICacheProvider<T>
            {
                T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
                IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
            }
        }

    This interface contains two methods to retrieve data from the cache, either as a single instance or as an IEnumerable. The second parameter is of type Func<T>; this is the method used to retrieve the data if nothing is found in the cache. The ASP.NET implementation of the ICacheProvider interface needs to live in a project that has a reference to System.Web; typically this will be the root UI project, or it could be a separate project. The key thing is that the domain and data access layers do not need System.Web references adding to them. In my sample MVC application, the CacheProvider is implemented in the UI project, in a folder called "CacheProviders":

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Caching;
        using CacheDiSample.Domain;

        namespace CacheDiSample.CacheProvider
        {
            public class CacheProvider<T> : ICacheProvider<T>
            {
                public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
                }

                public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
                }

                #region Helper Methods

                private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    U value;
                    if (!TryGetValue<U>(key, out value))
                    {
                        // Nothing cached yet: call the supplied delegate and cache the result
                        value = retrieveData();
                        if (!absoluteExpiry.HasValue)
                            absoluteExpiry = Cache.NoAbsoluteExpiration;

                        if (!relativeExpiry.HasValue)
                            relativeExpiry = Cache.NoSlidingExpiration;

                        HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);
                    }
                    return value;
                }

                private bool TryGetValue<U>(string key, out U value)
                {
                    object cachedValue = HttpContext.Current.Cache.Get(key);
                    if (cachedValue == null)
                    {
                        value = default(U);
                        return false;
                    }
                    else
                    {
                        try
                        {
                            value = (U)cachedValue;
                            return true;
                        }
                        catch
                        {
                            value = default(U);
                            return false;
                        }
                    }
                }

                #endregion
            }
        }

    The FetchAndCache helper method checks whether the specified cache key exists; if it does not, the Func<U> retrieveData delegate is called and the result is added to the cache.

    Using Castle Windsor to register the cache provider

    In the MVC UI project (my application root), Castle Windsor is used to register the CacheProvider implementation, using a Windsor installer:

        using Castle.MicroKernel.Registration;
        using Castle.MicroKernel.SubSystems.Configuration;
        using Castle.Windsor;

        using CacheDiSample.Domain;
        using CacheDiSample.CacheProvider;

        namespace CacheDiSample.WindsorInstallers
        {
            public class CacheInstaller : IWindsorInstaller
            {
                public void Install(IWindsorContainer container, IConfigurationStore store)
                {
                    container.Register(
                        Component.For(typeof(ICacheProvider<>))
                                 .ImplementedBy(typeof(CacheProvider<>))
                                 .LifestyleTransient());
                }
            }
        }

    Note that the cache provider is registered as an open generic type.

    Consuming a Repository

    I have an existing couple of repository interfaces defined in my domain layer:

    IRepository.cs

        using System;
        using System.Collections.Generic;

        using CacheDiSample.Domain.Model;

        namespace CacheDiSample.Domain.Repositories
        {
            public interface IRepository<T>
                where T : EntityBase
            {
                T GetById(int id);
                IList<T> GetAll();
            }
        }

    IBlogRepository.cs

        using System;
        using CacheDiSample.Domain.Model;

        namespace CacheDiSample.Domain.Repositories
        {
            public interface IBlogRepository : IRepository<Blog>
            {
                Blog GetByName(string name);
            }
        }

    These two repositories are implemented in the DataAccess layer, using Entity Framework to retrieve data (though that detail is not important here). One important point is that in the BaseRepository implementation of IRepository the methods are virtual, which allows the decorator to override them. The BlogRepository is registered in a RepositoriesInstaller, again in the MVC UI project.
        using Castle.MicroKernel.Registration;
        using Castle.MicroKernel.SubSystems.Configuration;
        using Castle.Windsor;

        using CacheDiSample.Domain.CacheDecorators;
        using CacheDiSample.Domain.Repositories;
        using CacheDiSample.DataAccess;

        namespace CacheDiSample.WindsorInstallers
        {
            public class RepositoriesInstaller : IWindsorInstaller
            {
                public void Install(IWindsorContainer container, IConfigurationStore store)
                {
                    container.Register(Component.For<IBlogRepository>()
                                                .ImplementedBy<BlogRepository>()
                                                .LifestyleTransient()
                                                .DependsOn(new
                                                {
                                                    nameOrConnectionString = "BloggingContext"
                                                }));
                }
            }
        }

    Now I can inject a dependency on IBlogRepository into a consumer, such as a controller in my sample code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        using CacheDiSample.Domain.Repositories;
        using CacheDiSample.Domain.Model;

        namespace CacheDiSample.Controllers
        {
            public class HomeController : Controller
            {
                private readonly IBlogRepository blogRepository;

                public HomeController(IBlogRepository blogRepository)
                {
                    if (blogRepository == null)
                        throw new ArgumentNullException("blogRepository");

                    this.blogRepository = blogRepository;
                }

                public ActionResult Index()
                {
                    ViewBag.Message = "Welcome to ASP.NET MVC!";

                    var blogs = blogRepository.GetAll();

                    return View(new Models.HomeModel { Blogs = blogs });
                }

                public ActionResult About()
                {
                    return View();
                }
            }
        }

    Consuming the Cache Provider via a Decorator

    I used a Decorator pattern to consume the cache provider. This means my repositories follow the open/closed principle, as they do not require any modifications to implement the caching. It also means that my controllers have no knowledge of the caching taking place, as the DI container simply injects the decorator instead of the root implementation of the repository. The first step is to implement a BlogRepository decorator with the caching logic in it. Note that this can reside in the domain layer, as it does not require any knowledge of the data access methods.

    BlogRepositoryWithCaching.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        using CacheDiSample.Domain.Model;
        using CacheDiSample.Domain;
        using CacheDiSample.Domain.Repositories;

        namespace CacheDiSample.Domain.CacheDecorators
        {
            public class BlogRepositoryWithCaching : IBlogRepository
            {
                // The generic cache provider, injected by DI
                private ICacheProvider<Blog> cacheProvider;
                // The decorated blog repository, injected by DI
                private IBlogRepository parentBlogRepository;

                public BlogRepositoryWithCaching(IBlogRepository parentBlogRepository, ICacheProvider<Blog> cacheProvider)
                {
                    if (parentBlogRepository == null)
                        throw new ArgumentNullException("parentBlogRepository");

                    this.parentBlogRepository = parentBlogRepository;

                    if (cacheProvider == null)
                        throw new ArgumentNullException("cacheProvider");

                    this.cacheProvider = cacheProvider;
                }

                public Blog GetByName(string name)
                {
                    string key = string.Format("CacheDiSample.DataAccess.GetByName.{0}", name);
                    // hard code 5 minute expiry!
                    TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                    return cacheProvider.Fetch(key, () =>
                        {
                            return parentBlogRepository.GetByName(name);
                        },
                        null, relativeCacheExpiry);
                }

                public Blog GetById(int id)
                {
                    string key = string.Format("CacheDiSample.DataAccess.GetById.{0}", id);

                    // hard code 5 minute expiry!
                    TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                    return cacheProvider.Fetch(key, () =>
                        {
                            return parentBlogRepository.GetById(id);
                        },
                        null, relativeCacheExpiry);
                }

                public IList<Blog> GetAll()
                {
                    string key = string.Format("CacheDiSample.DataAccess.GetAll");

                    // hard code 5 minute expiry!
                    TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                    return cacheProvider.Fetch(key, () =>
                        {
                            return parentBlogRepository.GetAll();
                        },
                        null, relativeCacheExpiry)
                        .ToList();
                }
            }
        }

    The key things in this caching repository are:

    - The generic ICacheProvider<Blog> implementation is injected into the repository via the constructor. This makes the cache provider functionality available to the repository.
    - The parent IBlogRepository implementation (which has the actual data access code) is also injected via the constructor. This allows the methods implemented in the parent to be called if nothing is found in the cache.
    - Each of the methods implemented in the repository is overridden, including those implemented in the generic BaseRepository. Every override follows the same pattern: it calls CacheProvider.Fetch, passing in the parentBlogRepository implementation of the method as the retrieval method to be used if nothing is present in the cache.

    Configuring the Caching Repository in the DI Container

    The final piece of the jigsaw is to tell Castle Windsor to use the BlogRepositoryWithCaching implementation of IBlogRepository, but to inject the actual data access implementation into this decorator. This is easily achieved by modifying the RepositoriesInstaller to use Windsor's implicit decorator wiring:

        using Castle.MicroKernel.Registration;
        using Castle.MicroKernel.SubSystems.Configuration;
        using Castle.Windsor;

        using CacheDiSample.Domain.CacheDecorators;
        using CacheDiSample.Domain.Repositories;
        using CacheDiSample.DataAccess;

        namespace CacheDiSample.WindsorInstallers
        {
            public class RepositoriesInstaller : IWindsorInstaller
            {
                public void Install(IWindsorContainer container, IConfigurationStore store)
                {
                    // Use Castle Windsor's implicit wiring for the blog repository decorator.
                    // Register the outermost decorator first
                    container.Register(Component.For<IBlogRepository>()
                                                .ImplementedBy<BlogRepositoryWithCaching>()
                                                .LifestyleTransient());
                    // Next register the IBlogRepository implementation to inject into the outer decorator
                    container.Register(Component.For<IBlogRepository>()
                                                .ImplementedBy<BlogRepository>()
                                                .LifestyleTransient()
                                                .DependsOn(new
                                                {
                                                    nameOrConnectionString = "BloggingContext"
                                                }));
                }
            }
        }

    This is all that is needed. Now when a consumer calls one of the repository's methods, the call is routed via the caching mechanism. You can test this by stepping through the code and seeing that the DataAccess.BlogRepository code is only called if there is no data in the cache, or the cached data has expired. The next step is to add SQL Cache Dependency support into this pattern; that will be a future post.
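    One piece the post does not show is the container bootstrap. Below is a minimal sketch of how the installers might be wired up at application start in Global.asax.cs; the WindsorControllerFactory here is my own illustrative adapter (not part of the sample), and it assumes the controllers are also registered with the container:

        using System;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Castle.MicroKernel;
        using Castle.Windsor;
        using CacheDiSample.WindsorInstallers;

        namespace CacheDiSample
        {
            // Illustrative controller factory that resolves controllers through Windsor,
            // so HomeController receives its IBlogRepository (and the caching decorator)
            // automatically. Production versions usually come from a Windsor/MVC
            // integration sample.
            public class WindsorControllerFactory : DefaultControllerFactory
            {
                private readonly IKernel kernel;

                public WindsorControllerFactory(IKernel kernel)
                {
                    this.kernel = kernel;
                }

                protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
                {
                    if (controllerType == null)
                        return base.GetControllerInstance(requestContext, controllerType);
                    return (IController)kernel.Resolve(controllerType);
                }

                public override void ReleaseController(IController controller)
                {
                    kernel.ReleaseComponent(controller);
                }
            }

            public class MvcApplication : System.Web.HttpApplication
            {
                private static IWindsorContainer container;

                protected void Application_Start()
                {
                    // Run the installers so the decorator chain and the cache provider
                    // are registered; the controllers themselves would need registering
                    // too (an assumption, perhaps via a further installer in the sample)
                    container = new WindsorContainer()
                        .Install(new CacheInstaller(), new RepositoriesInstaller());

                    ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container.Kernel));
                }
            }
        }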

    Read the article
