Search Results

Search found 22947 results on 918 pages for 'xml schema collection'.


  • [C#] How do I make hierarchy of objects from two alternating classes?

    - by Millicent
    Here's the scenario: I have two classes ("Page" and "Field") that descend from a common class ("Pield"). They represent tags in an XML file and are in the following hierarchy: <page> <field> <page> ... </page> ... </field> ... </page> I.e.: Page and Field objects form a hierarchy of alternating type (there may be more than one Page or Field on each rung of the hierarchy). Every Field and Page object has a parent property, which points to the respective parent object of the other type. This would not be a problem, except that the parent-child mechanism is controlled by the base class (Pield), which is shared by the two descended classes (Page and Field). Here is one attempt, which fails at the line "Pield<Page> child = new Pield<Page>(pchild, this);": class Pield<T> { private T _pield_parent; ... private void get_children() { ... Pield<Page> child = new Pield<Page>(pchild, this); ... } ... } class Page : Pield<Field> { ... } class Field : Pield<Page> { ... } Any ideas about how to solve this elegantly? Best, Millicent
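
A hedged sketch of one common way out (the class names mirror the question, but the design itself is an assumption, not Millicent's eventual solution): drop the generic parameter and let a non-generic base class own the parent/child plumbing, while the derived classes constrain the alternation through their constructor signatures.

    using System.Collections.Generic;

    // Non-generic base: owns the parent/child mechanism for both subclasses.
    abstract class Pield
    {
        public Pield Parent { get; private set; }
        public List<Pield> Children { get; } = new List<Pield>();

        protected Pield(Pield parent)
        {
            Parent = parent;
            parent?.Children.Add(this);
        }
    }

    // The alternation is enforced by the constructor parameter types:
    // a Page can only be parented by a Field, and vice versa.
    class Page : Pield
    {
        public Page(Field parent) : base(parent) { }
    }

    class Field : Pield
    {
        public Field(Page parent) : base(parent) { }
    }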

    Read the article

  • how can i read the integer value from xml document in objective-c

    - by uttam
    This is my XML document; I want to use the value of ID and MLSID in my view controller. How can I read them? <Table> <ID>1</ID> <MLSID>70980420</MLSID> <STREET_NO>776</STREET_NO> <STREET_NAME>Boylston</STREET_NAME> <AreaName>Back Bay</AreaName> </Table> I have created one object class, Book, which stores all the values of the XML data, and then I put those objects in an array. Now I am able to retrieve the string values but not the integer values. This is my object class: @interface Book : NSObject { NSInteger ID; NSInteger MLSID; //Same name as the Entity Name. } How do I retrieve the value of MLSID in the view controller, and how do I print it?

    Read the article

  • Get node parent of defined type using xpath

    - by IordanTanev
    Hi, I will give an example of the problem I have. My XML is like this: <root> <child Name = "child1"> <node> <element1>Value1</element1> <element2>Value2</element2> </node> </child> <child Name = "child2"> <element1>Value1</element1> <element2>Value2</element2> <element3>Value3</element3> </child> </root> I have an XPath expression that returns all "element2" nodes. Then, for every node of type "element2", I want to find the node of type "child" that contains it. The problem is that between these two nodes there can be from 1 to n different nodes, so I can't just use "..". Is there something like "//" that will look up instead of down? Best Regards, Iordan
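
There is indeed an upward-looking counterpart: the XPath ancestor axis. A minimal C# sketch of how it could be applied here (the file name is a placeholder; element and attribute names are taken from the question):

    using System;
    using System.Xml;

    class AncestorDemo
    {
        static void Main()
        {
            var doc = new XmlDocument();
            doc.Load("input.xml"); // the document from the question

            // For each element2, walk upward to the nearest enclosing <child>,
            // however many intermediate nodes sit in between.
            foreach (XmlNode node in doc.SelectNodes("//element2"))
            {
                XmlNode owner = node.SelectSingleNode("ancestor::child[1]");
                Console.WriteLine(owner?.Attributes["Name"]?.Value);
            }
        }
    }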

    Read the article

  • Return node level of hierarchical xml

    - by Ryan
    In a treeview you can retrieve the level of an item. I am trying to accomplish the same thing with the given input being an object. The XML data I will use for this example would be something like the following <?xml version="1.0" encoding="utf-8" ?> <Testing> <Numbers> <Number val="1"> <Number val="1.1"> <Number val="1.1.1"> <Number val="1.1.2" /> <Number val="1.1.3" /> <Number val="1.1.4" /> </Number> </Number> <Number val="1.2" /> <Number val="1.3" /> <Number val="1.4" /> </Number> <Number val="2" /> <Number val="3" /> <Number val="4" /> </Numbers> <Numbers> <Number val="5" /> <Number val="6" /> <Number val="7" /> <Number val="8" /> </Numbers> </Testing> This one is kicking my butt!
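
For reference, one hedged way to get that level with LINQ to XML (a sketch, not the poster's eventual solution): the depth of an element is simply how many ancestors it has.

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class NodeLevelDemo
    {
        static void Main()
        {
            // "Testing.xml" stands in for the sample document above.
            XDocument doc = XDocument.Load("Testing.xml");

            foreach (XElement number in doc.Descendants("Number"))
            {
                // Ancestors() enumerates every element up to the root,
                // so its count is the node's level.
                int level = number.Ancestors().Count();
                Console.WriteLine("{0} -> level {1}", number.Attribute("val").Value, level);
            }
        }
    }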

    Read the article

  • create child nodes from sibling nodes until a different sibling occurs.

    - by user364939
    Hi, does anyone know what the XSL would look like to transform this XML? There can be N nte's after pid and N nte's after pv1. The structure is guaranteed, in that every nte that follows pid belongs to pid and every nte following pv1 belongs to pv1. From: <pid> </pid> <nte> <nte-1>1</nte-1> <nte-3>Note 1</nte-3> </nte> <nte></nte> <pv1></pv1> <nte> </nte> into: <pid> <nte> <nte-1>1</nte-1> <nte-3>Note 1</nte-3> </nte> <nte> </nte> </pid> <pv1> <nte> </nte> </pv1> Thanks!

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in terms of DDD, but it is an abstraction layer that becomes handy at the moment of unit testing the code around this repository. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful for developing WPF or Silverlight based applications but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is no longer necessary to manually set or remove the links in the data context when an item is added or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection.

Order order = new Order { Name = "Foo" };
OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };

var context = new OrderContext();
context.AddToOrders(order);
context.AddToOrderItems(item);
context.SetLink(item, "Order", order);
context.SaveChanges();

Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S.

Order order = new Order { Name = "Foo" };
OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };

order.Items.Add(item);

var context = new OrderContext();
context.AddToOrders(order);
context.SaveChanges();

In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the datasvcutil tool (you can find more information about this new feature and how to use it in this post).

DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0

Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly.

1. All the entities retrieved directly from the data context with a query track changes and report them to the data context automatically.

2. An entity created with "new" does not track any change in its properties or associations. In order to enable change tracking in such an entity, you need to do the following trick.

public Order CreateOrder()
{
  var collection = new DataServiceCollection<Order>(this.context);
  var order = new Order();
  collection.Add(order);
  return order;
}

You basically need to create a collection, and add the entity to that collection with the "Add" method, to enable change tracking on that entity.

3. If you need to attach an existing entity (for example, if you created the entity with the "new" operator rather than retrieving it from the data context with a query) to a data context for tracking changes, you can use the "Load" method in the DataServiceCollection.

var order = new Order { Id = 1 };
var collection = new DataServiceCollection<Order>(this.context);
collection.Load(order);

In this case, the order with Id = 1 must already exist in the data source exposed by the data service. Otherwise, you will get an error because the entity does not exist.
These cool extension methods, discussed by Stuart Leeks in this post, replace all the magic strings in the "Expand" operation with expression trees and represent another feature I am going to use to implement this generic repository. Thanks to these extension methods, you can replace the following query based on magic strings with a piece of code that only uses expressions.

Magic strings,

var customers = dataContext.Customers
  .Expand("Orders")
  .Expand("Orders/Items")

Expressions,

var customers = dataContext.Customers
  .Expand(c => c.Orders.SubExpand(o => o.Items))

That query basically returns all the customers with their orders and order items. Ok, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this,

public interface IRepository
{
  T Create<T>() where T : new();
  void Update<T>(T entity);
  void Delete<T>(T entity);
  IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties);
  IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties);
  void Attach<T>(T entity);
  void SaveChanges();
}

The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository.

public class DataServiceRepository : IRepository
{
  ResourceRepositoryContext context;

  public DataServiceRepository()
    : this(new DataServiceContext())
  {
  }

  public DataServiceRepository(DataServiceContext context)
  {
    this.context = context;
  }

  private static string ResolveEntitySet(Type type)
  {
    var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault();
    if (entitySetAttribute != null)
      return entitySetAttribute.EntitySet;
    return null;
  }

  public T Create<T>() where T : new()
  {
    var collection = new DataServiceCollection<T>(this.context);
    var entity = new T();
    collection.Add(entity);
    return entity;
  }

  public void Update<T>(T entity)
  {
    this.context.UpdateObject(entity);
  }

  public void Delete<T>(T entity)
  {
    this.context.DeleteObject(entity);
  }

  public void Attach<T>(T entity)
  {
    var collection = new DataServiceCollection<T>(this.context);
    collection.Load(entity);
  }

  public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties)
  {
    var entitySet = ResolveEntitySet(typeof(T));
    var query = context.CreateQuery<T>(entitySet);
    foreach (var e in eagerProperties)
    {
      query = query.Expand(e);
    }
    return query.Where(predicate);
  }

  public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
  {
    var entitySet = ResolveEntitySet(typeof(T));
    var query = context.CreateQuery<T>(entitySet);
    foreach (var e in eagerProperties)
    {
      query = query.Expand(e);
    }
    return query;
  }

  public void SaveChanges()
  {
    this.context.SaveChanges(SaveChangesOptions.Batch);
  }
}

For instance, you can use the following code to retrieve customers with a first name equal to "John", and all their orders, in a single call.

repository.Retrieve<Customer>(
  c => c.FirstName == "John", //Where
  c => c.Orders.SubExpand(o => o.Items));

In case you want to have some pre-defined queries that you are going to use in several places, you can put them in a specific class.
public static class CustomerQueries
{
  public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)
  {
    return c => c.LastName == lastName;
  }
}

And then, use it with the repository.

repository.Retrieve<Customer>(
  CustomerQueries.LastNameEqualsTo("foo"),
  c => c.Orders.SubExpand(o => o.Items));
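
To close the loop on the unit-testing claim at the top of the post: the whole point of the IRepository abstraction is that it can be swapped for a fake in tests. Here is a hedged sketch of such a fake (the InMemoryRepository class below is mine, not part of the original post), which satisfies the interface without ever touching a real data service:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    public class InMemoryRepository : IRepository
    {
        private readonly List<object> store = new List<object>();

        public T Create<T>() where T : new()
        {
            var entity = new T();
            store.Add(entity);
            return entity;
        }

        public void Update<T>(T entity) { /* nothing to do: objects live in memory */ }
        public void Delete<T>(T entity) { store.Remove(entity); }
        public void Attach<T>(T entity) { store.Add(entity); }

        public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
        {
            // Eager-loading expressions are irrelevant in memory; ignore them.
            return store.OfType<T>().AsQueryable();
        }

        public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate,
            params Expression<Func<T, object>>[] eagerProperties)
        {
            return store.OfType<T>().AsQueryable().Where(predicate);
        }

        public void SaveChanges() { /* nothing to persist */ }
    }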

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. This article contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform specific information.

Contents: Introduction, Downloading the software, Preparing the database, Creating the IRM database schema, WebLogic Server installation, Installing Oracle IRM

Introduction

Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some Linux tips to make life a bit easier.

Use a 64-bit platform; IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service.

Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server.

Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum here.

Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; you extract them and then install. This quickly consumes disk space, which you can easily recover by deleting the downloaded and extracted files afterwards. But it's nice to have the disk space spare to keep these around in case you need to restart any part of the installation process again.

Downloading the software

OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following: Oracle WebLogic Server, Oracle Database, Oracle Repository Creation Utility (RCU), and the Oracle IRM server. You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux.

Preparing the database

I'm not going to go through the finer points of installing the database. There are many very good guides on installing the Oracle Database. However, one thing I would suggest you think about is enabling TDE, network encryption and using Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. There is no point in going to all the effort to secure document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies here. With a database up and running, we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs and associated rights, etc.

Creating the IRM database schema

Oracle uses the Repository Creation Utility, which builds your schema. Extract the files from the RCU zip, then in a terminal window:

cd /oracle/install/rcu/bin
./rcu

This will launch the Repository Creation Utility and you will be presented with the image to the right. Hit next and continue onto the next dialog. You are asked if you are going to be creating a new schema or wish to drop an existing one; you obviously just need to click next at this point to create a new schema.
The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation.

Hostname: irm.oracle.demo
Port: 1521 (This is the default TCP port for the Oracle Database)
Service Name: irm.oracle.demo. Note this is not the SID, but the service name.
Username: sys
Password: ********
Role: SYSDBA

And then select next. Because the RCU contains schemas for many of the Oracle technologies, you now need to select just the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix, which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc.

The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. Some brave souls store this password in an Excel spreadsheet, which is then secured against the IRM server you're about to install in this guide.

Nearing the end of the schema creation is the mapping of the tablespaces to the schema. Note that I had already set up a tablespace that was encrypted using TDE, and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking next causes it to create the schema and default data. After this you are presented with the completion summary.

WebLogic Server installation

The database is now ready and the next step is to install the application server. Oracle IRM 11g is a JEE application and is currently only supported on Oracle WebLogic Server. So the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command.

java -d64 -jar wls1033_generic.jar

In the resulting dialog, hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/. Then all my software goes into this one folder, all owned by the "oracle" user.

The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for.

Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform like in this guide, no JDK is bundled with the installer, but as you can see in the image on the right, it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken; no changes here, just click next. And finally we are ready to install; hit next, sit back and relax. Typically this takes about 10 minutes.

After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now just hit done and let's move to the final step of the installation process.

Installing Oracle IRM

The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves.
Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run ./runInstaller. This will launch the installer, which will also ask for the location of the JDK. Look at the image on the right for the detail.

You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and to have created the schemas, but you've just done all that above (I hope), so we are ready to go. The installer now checks that you have all the required libraries installed and other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue... Next...

Now choose where to install the IRM files; you must install into the same Middleware Home as the WebLogic Server installation you just performed. Usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install.

The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage. Again, not fun. Hit Install; time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes.

Finally the installation of your IRM server is complete. Click finish, and the next phase is to create the WebLogic domain and start configuring your server. Now move onto the next article in this guide... configuring your IRM server ready to seal your first document.

    Read the article

  • Inside the Concurrent Collections: ConcurrentBag

    - by Simon Cooper
    Unlike the other concurrent collections, ConcurrentBag does not really have a non-concurrent analogy. As stated in the MSDN documentation, ConcurrentBag is optimised for the situation where the same thread is both producing and consuming items from the collection. We'll see how this is the case as we take a closer look. Again, I recommend you have ConcurrentBag open in a decompiler for reference. Thread Statics ConcurrentBag makes heavy use of thread statics - static variables marked with ThreadStaticAttribute. This is a special attribute that instructs the CLR to scope any values assigned to or read from the variable to the executing thread, not globally within the AppDomain. This means that if two different threads assign two different values to the same thread static variable, one value will not overwrite the other, and each thread will see the value they assigned to the variable, separately to any other thread. This is a very useful function that allows for ConcurrentBag's concurrency properties. You can think of a thread static variable: [ThreadStatic] private static int m_Value; as doing the same as: private static Dictionary<Thread, int> m_Values; where the executing thread's identity is used to automatically set and retrieve the corresponding value in the dictionary. In .NET 4, this usage of ThreadStaticAttribute is encapsulated in the ThreadLocal class. Lists of lists ConcurrentBag, at its core, operates as a linked list of linked lists: Each outer list node is an instance of ThreadLocalList, and each inner list node is an instance of Node. Each outer ThreadLocalList is owned by a particular thread, accessible through the thread local m_locals variable: private ThreadLocal<ThreadLocalList<T>> m_locals It is important to note that, although the m_locals variable is thread-local, that only applies to accesses through that variable. The objects referenced by the thread (each instance of the ThreadLocalList object) are normal heap objects that are not specific to any thread. Thinking back to the Dictionary analogy above, if each value stored in the dictionary could be accessed by other means, then any thread could access the value belonging to other threads using that mechanism. Only reads and writes to the variable defined as thread-local are re-routed by the CLR according to the executing thread's identity. So, although m_locals is defined as thread-local, the m_headList, m_nextList and m_tailList variables aren't. This means that any thread can access all the thread local lists in the collection by doing a linear search through the outer linked list defined by these variables. Adding items So, onto the collection operations. First, adding items. This one's pretty simple. If the current thread doesn't already own an instance of ThreadLocalList, then one is created (or, if there are lists owned by threads that have stopped, it takes control of one of those). Then the item is added to the head of that thread's list. That's it. Don't worry, it'll get more complicated when we account for the other operations on the list! Taking & Peeking items This is where it gets tricky. If the current thread's list has items in it, then it peeks or removes the head item (not the tail item) from the local list and returns that. However, if the local list is empty, it has to go and steal another item from another list, belonging to a different thread. 
It iterates through all the thread local lists in the collection using the m_headList and m_nextList variables until it finds one that has items in it, and it steals one item from that list. Up to this point, the two threads had been operating completely independently. To steal an item from another thread's list, the stealing thread has to do it in such a way as to not step on the owning thread's toes.

Recall how adding and removing items both operate on the head of the thread's linked list? That gives us an easy way out - a thread trying to steal items from another thread can pop in round the back of another thread's list using the m_tail variable, and steal an item from the back without the owning thread knowing anything about it. The owning thread can carry on completely independently, unaware that one of its items has been nicked. However, this only works when there are at least 3 items in the list, as that guarantees there will be at least one node between the owning thread performing operations on the list head and the thread stealing items from the tail - there's no chance of the two threads operating on the same node at the same time and causing a race condition. If there are fewer than three items in the list, then there does need to be some synchronization between the two threads. In this case, the lock on the ThreadLocalList object is used to mediate access to a thread's list when there's the possibility of contention.

Thread synchronization

In ConcurrentBag, this is done using several mechanisms:

Operations performed by the owner thread only take out the lock when there are fewer than three items in the collection. With three or more items, there won't be any conflict with a stealing thread operating on the tail of the list.

If a lock isn't taken out, the owning thread sets the list's m_currentOp variable to a non-zero value for the duration of the operation. This indicates to all other threads that there is a non-locked operation currently occurring on that list.

The stealing thread always takes out the lock, to prevent two threads trying to steal from the same list at the same time.

After taking out the lock, the stealing thread spinwaits until m_currentOp has been set to zero before actually performing the steal. This ensures there won't be a conflict with the owning thread when the number of items in the list is on the 2-3 item borderline. If any add or remove operations are started in the meantime, and the list is below 3 items, those operations try to take out the list's lock and are blocked until the stealing thread has finished.

This allows a thread to steal an item from another thread's list without corrupting it. What about synchronization in the collection as a whole?

Collection synchronization

Any thread that operates on the collection's global structure (accessing anything outside the thread local lists) has to take out the collection's global lock - m_globalListsLock. This single lock is sufficient when adding a new thread local list, as the items inside each thread's list are unaffected. However, what about operations (such as Count or ToArray) that need to access every item in the collection? In order to ensure a consistent view, all operations on the collection are stopped while the count or ToArray is performed. This is done by freezing the bag at the start, performing the global operation, and unfreezing at the end:

The global lock is taken out, to prevent structural alterations to the collection.

m_needSync is set to true.
This notifies all the threads that they need to take out their list's lock regardless of what operation they're doing.

All the list locks are taken out in order. This blocks all locking operations on the lists.

The freezing thread waits for all current lockless operations to finish by spinwaiting on each m_currentOp field.

The global operation can then be performed while the bag is frozen, but no other operations can take place at the same time, as all other threads are blocked on a list's lock. Then, once the global operation has finished, the locks are released, m_needSync is unset, and normal concurrent operation resumes.

Concurrent principles

That's the essence of how ConcurrentBag operates. Each thread operates independently on its own local list, except when it has to steal items from another list. When stealing, only the stealing thread is forced to take out the lock; the owning thread only has to when there is the possibility of contention. And a global lock controls accesses to the structure of the collection outside the thread lists. Operations affecting the entire collection take out all locks in the collection to freeze the contents at a single point in time. So, what principles can we extract here?

Threads operate independently. Thread-static variables and ThreadLocal make this easy. Threads operate entirely concurrently on their own structures; only when they need to grab data from another thread is there any thread contention.

Minimised lock-taking. Even when two threads need to operate on the same data structures (one thread stealing from another), they do so in such a way that the probability of actually blocking on a lock is minimised; the owning thread always operates on the head of the list, and the stealing thread always operates on the tail.

Management of lockless operations. Any operations that don't take out a lock still have a 'hook' to force them to lock when necessary. This allows all operations on the collection to be stopped temporarily while a global snapshot is taken. Hopefully, such operations will be short-lived and infrequent.

That's all the concurrent collections covered. I hope you've found it as informative and interesting as I have. Next, I'll be taking a closer look at ThreadLocal, which I came across while analyzing ConcurrentBag. As you'll see, the operation of this class deserves a much closer look.
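
As an aside, since the post leans on it twice: a tiny sketch of the .NET 4 ThreadLocal<T> class mentioned above (the demo itself is mine, not from the post), showing that each thread really does get its own copy of the value:

    using System;
    using System.Threading;

    class ThreadLocalDemo
    {
        // Each thread that reads Counter.Value gets its own independent instance.
        static readonly ThreadLocal<int> Counter = new ThreadLocal<int>(() => 0);

        static void Main()
        {
            var t1 = new Thread(() => { Counter.Value++; Console.WriteLine("t1: " + Counter.Value); });
            var t2 = new Thread(() => { Counter.Value += 10; Console.WriteLine("t2: " + Counter.Value); });
            t1.Start(); t2.Start();
            t1.Join(); t2.Join();

            // The main thread's value is untouched by either increment.
            Console.WriteLine("main: " + Counter.Value); // prints "main: 0"
        }
    }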

    Read the article

  • Data that has been deleted in P6, how is it updated in Analytics

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0 the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs, or those deletes would be lost. After the ETL process has run, the refrdel table can be cleared out. It is important to keep any purging of the refrdel table in a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher, the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup against ETL runs is no longer applicable as of this release. In the Extended Schema tables (e.g., TaskX) there can still be deleted data present. The Extended Schema views join on the primary PMDB tables (e.g., Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the cleanup procedure documented in the P6 Extended Schema white paper. This can be run occasionally but does not need to run often unless large amounts of data have been deleted.

    Read the article

  • Is a "model" branch a common practice?

    - by dukeofgaming
    I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been. Say that you are working with:

1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary)
2. Schema source (a .sql file)
3. Schema-based code generation

The normal way we were working was with feature branches, so we would make changes to the model files (the database specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate schema source and regenerate code (which might have human code on top of it). It makes so much sense to me that it feels weird not having seen this earlier as a common practice. Edit: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching.

    Read the article

  • Client-server application design issue

    - by user2547823
    I have a collection of clients on the server's side, and there are some objects that need to work with that collection - adding and removing clients, sending messages to them, updating connection settings and so on. They should perform these actions simultaneously, so a mutex or another synchronization primitive is required. I want to share one instance of the collection between these objects, but all of them require access to the collection's private fields. I hope the code sample makes it clearer [C++]: class Collection { std::vector< Client* > clients; Mutex mLock; ... } class ClientNotifier { void sendMessage() { mLock.lock(); // loop over clients and send message to each of them } } class ConnectionSettingsUpdater { void changeSettings( const std::string& name ) { mLock.lock(); // if client with this name is inside collection, change its settings } } As you can see, all these classes require direct access to Collection's private fields. Can you give me some advice about how to implement such behaviour correctly, i.e. keeping Collection's interface simple without it knowing about its users?
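
The usual answer is to invert the dependency: keep the vector and the mutex private and have the collection expose whole operations, so callers never see the lock at all. A hedged sketch of that shape (rendered in C# to match the other examples in these results; the idea is language-neutral and all names are mine):

    using System.Collections.Generic;

    public class Client
    {
        public string Name { get; set; }
        public void Send(string message) { /* transport details elided */ }
        public void Configure(string settings) { /* ... */ }
    }

    public class ClientCollection
    {
        private readonly List<Client> clients = new List<Client>();
        private readonly object gate = new object(); // plays the role of mLock

        public void Add(Client c) { lock (gate) clients.Add(c); }

        // Callers get whole operations, never the list or the lock.
        public void SendToAll(string message)
        {
            lock (gate)
                foreach (var c in clients) c.Send(message);
        }

        public void UpdateSettings(string name, string settings)
        {
            lock (gate)
                clients.FindAll(c => c.Name == name).ForEach(c => c.Configure(settings));
        }
    }

ClientNotifier and ConnectionSettingsUpdater then hold a reference to ClientCollection and call SendToAll and UpdateSettings, keeping the synchronization an implementation detail of the collection.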

    Read the article

  • Static vs. dynamic memory allocation - lots of constant objects, only small part of them used at runtime

    - by k29
    Here are two options: Option 1: enum QuizCategory { CATEGORY_1(new MyCollection<Question>() .add(Question.QUESTION_A) .add(Question.QUESTION_B) .add...), CATEGORY_2(new MyCollection<Question>() .add(Question.QUESTION_B) .add(Question.QUESTION_C) .add...), ... ; public MyCollection<Question> collection; private QuizCategory(MyCollection<Question> collection) { this.collection = collection; } public Question getRandom() { return collection.getRandomQuestion(); } } Option 2: enum QuizCategory2 { CATEGORY_1 { @Override protected MyCollection<Question> populateWithQuestions() { return new MyCollection<Question>() .add(Question.QUESTION_A) .add(Question.QUESTION_B) .add...; } }, CATEGORY_2 { @Override protected MyCollection<Question> populateWithQuestions() { return new MyCollection<Question>() .add(Question.QUESTION_B) .add(Question.QUESTION_C) .add...; } }; public Question getRandom() { MyCollection<Question> collection = populateWithQuestions(); return collection.getRandomQuestion(); } protected abstract MyCollection<Question> populateWithQuestions(); } There will be around 1000 categories, each containing 10 - 300 questions (100 on average). At runtime typically only 10 categories and 30 questions will be used. Each question is itself an enum constant (with its fields and methods). I'm trying to decide between those two options in the mobile application context. I haven't done any measurements since I have yet to write the questions and would like to gather more information before committing to one or another option. As far as I understand: (a) Option 1 will perform better since there will be no need to populate the collection and then garbage-collect the questions; (b) Option 1 will require extra memory: 1000 categories x 100 questions x 4 bytes for each reference = 400 Kb, which is not significant. So I'm leaning to Option 1, but just wondered if I'm correct in my assumptions and not missing something important? Perhaps someone has faced a similar dilemma? Or perhaps it doesn't actually matter that much?

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction

This document describes how to use the Oracle Solaris 11 Automated Install server to automate the installation of Solaris 11 Zones. I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

Architecture layout: Figure 1. Architecture layout

Prerequisite

Set up the Automated Install server (AI) using the following instructions: "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup will be creating two Solaris 11 Zones configuration files.

Step 1: Create the Solaris 11 Zones configuration files

The Solaris Zones configuration files should be in the format produced by the zonecfg export command.

# zonecfg -z zone1 export > /var/tmp/zone1
# cat /var/tmp/zone1
create -b
set brand=solaris
set zonepath=/rpool/zones/zone1
set autoboot=true
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end

Create a backup copy of this file under a different name, for example, zone2.

# cp /var/tmp/zone1 /var/tmp/zone2

Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example: set zonepath=/rpool/zones/zone2

Step 2: Copy and share the Zones configuration files

Create the NFS directory for the Zones configuration files:

# mkdir /export/zone_config

Share the directory for the Zones configuration files:

# share -o ro /export/zone_config

Copy the Zones configuration files into the NFS shared directory:

# cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

Verify that the NFS share has been created using the following command:

# share
export_zone_config      /export/zone_config     nfs     sec=sys,ro

Step 3: Add the Global Zone as a client to the install service

Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service.

# installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

You can verify the client creation using the following command:

# installadm list -c
Service Name  Client Address     Arch   Image Path
------------  --------------     ----   ----------
s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

Step 4: Global Zone manifest setup

First, get a list of the installation services and the manifests associated with them:

# installadm list -m
Service Name   Manifest        Status
------------   --------        ------
default-i386   orig_default   Default
s11x86service  orig_default   Default

Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

# installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

Note: Do not use the following elements or attributes in a non-global zone AI manifest:

    The auto_reboot attribute of the ai_instance element
    The http_proxy attribute of the ai_instance element
    The disk child element of the target element
    The noswap attribute of the logical element
    The nodump attribute of the logical element
    The configuration element

Step 6: Global Zone profile setup

We are going to create a Global Zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.

# sysconfig create-profile -o /var/tmp/gz_profile.xml

You need to provide the host information, for example:

    Default router
    Root password
    DNS information

The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu

You can validate the profile using the following command:

# installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
Validating static profile gz_profile.xml...  Passed

Next, instantiate a profile with the install service. In our case, use the following syntax for doing this:

# installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         None

We can see that the gz_profile has been created and associated with the s11x86service install service.

Step 7: Set up the Solaris Zones configuration profiles

This step is similar to the Global Zone profile creation in step 6.

# sysconfig create-profile -o /var/tmp/zone1_profile.xml
# sysconfig create-profile -o /var/tmp/zone2_profile.xml

You can validate the profiles using the following command:

# installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
Validating static profile zone1_profile.xml...  Passed
# installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
Validating static profile zone2_profile.xml...  Passed

Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile.

# installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile.

# installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   zone1_profile      zonename = zone1
   zone2_profile      zonename = zone2
   gz_profile         None

We can see that we have three profiles in the s11x86service install service:

    Global Zone: gz_profile
    zone1: zone1_profile
    zone2: zone2_profile
Step 8: Global Zone setup

Associate the Global Zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (Global Zone), where:

    gzmanifest is the name of the manifest.
    gz_profile is the name of the configuration profile.
    mac="0:14:4f:2:a:19" is the client (Global Zone) MAC address.
    s11x86service is the install service name.

# installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

You can verify the manifest and profile association using the following command:

# installadm list -n s11x86service -p -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   gzmanifest                   mac  = 00:14:4F:02:0A:19
   orig_default        Default  None

Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         mac      = 00:14:4F:02:0A:19
   zone2_profile      zonename = zone2
   zone1_profile      zonename = zone1

Step 9: Provision the host with the Non-Global Zones

The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot

Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu

What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

You can list the status of all the Solaris Zones using the following command:

# zoneadm list -civ

Once the Zones are in the running state, you can log in to a Zone using the following command:

# zlogin zone1

Troubleshooting Automated Installations

If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

NOTE: Zones are not installed if any of the following errors occurs:

    A zone config file is not syntactically correct.
    A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    Required datasets are not configured in the global zone.

For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

Conclusion

This paper demonstrated the benefits of using the Automated Install server to simplify the Non-Global Zones setup, including the creation and configuration of the Global Zone manifest and the Solaris Zones profiles.

    Read the article

  • log4j.xml configuration with <rollingPolicy> and <triggeringPolicy>

    - by Mike Smith
    I am trying to configure log4j.xml so that the log file is rolled based on file size, and the rolled file's name follows a pattern such as "C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log". I followed this discussion: http://web.archiveorange.com/archive/v/NUYyjJipzkDOS3reRiMz Finally it worked for me, but only when I added: try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } to the method: public boolean isTriggeringEvent(Appender appender, LoggingEvent event, String filename, long fileLength) which makes it work. The question is whether there is a better way to make it work, since this method is called many times and slows my program down. Here is the code: package com.mypack.rolling; import org.apache.log4j.rolling.RollingPolicy; import org.apache.log4j.rolling.RolloverDescription; import org.apache.log4j.rolling.TimeBasedRollingPolicy; /** * Same as org.apache.log4j.rolling.TimeBasedRollingPolicy but acts only as * RollingPolicy and NOT as TriggeringPolicy. * * This allows us to combine this class with a size-based triggering policy * (decision to roll based on size, name of rolled files based on time) * */ public class CustomTimeBasedRollingPolicy implements RollingPolicy { TimeBasedRollingPolicy timeBasedRollingPolicy = new TimeBasedRollingPolicy(); /** * Set file name pattern. * @param fnp file name pattern. */ public void setFileNamePattern(String fnp) { timeBasedRollingPolicy.setFileNamePattern(fnp); } /* public void setActiveFileName(String fnp) { timeBasedRollingPolicy.setActiveFileName(fnp); }*/ /** * Get file name pattern. * @return file name pattern. */ public String getFileNamePattern() { return timeBasedRollingPolicy.getFileNamePattern(); } public RolloverDescription initialize(String file, boolean append) throws SecurityException { return timeBasedRollingPolicy.initialize(file, append); } public RolloverDescription rollover(String activeFile) throws SecurityException { return timeBasedRollingPolicy.rollover(activeFile); } public void activateOptions() { timeBasedRollingPolicy.activateOptions(); } } package com.mypack.rolling; import org.apache.log4j.helpers.OptionConverter; import org.apache.log4j.Appender; import org.apache.log4j.rolling.TriggeringPolicy; import org.apache.log4j.spi.LoggingEvent; import org.apache.log4j.spi.OptionHandler; /** * Copy of org.apache.log4j.rolling.SizeBasedTriggeringPolicy but able to accept * a human-friendly value for maximumFileSize, eg. "10MB" * * Note that sub-classing SizeBasedTriggeringPolicy is not possible because that * class is final */ public class CustomSizeBasedTriggeringPolicy implements TriggeringPolicy, OptionHandler { /** * Rollover threshold size in bytes. */ private long maximumFileSize = 10 * 1024 * 1024; // default max size: 10 MB /** * Set the maximum size that the output file is allowed to reach before * being rolled over to backup files. * * <p> * In configuration files, the <b>MaxFileSize</b> option takes a long * integer in the range 0 - 2^63. You can specify the value with the * suffixes "KB", "MB" or "GB" so that the integer is interpreted as being * expressed respectively in kilobytes, megabytes or gigabytes. For example, * the value "10KB" will be interpreted as 10240.
* * @param value * the maximum size that the output file is allowed to reach */ public void setMaxFileSize(String value) { maximumFileSize = OptionConverter.toFileSize(value, maximumFileSize + 1); } public long getMaximumFileSize() { return maximumFileSize; } public void setMaximumFileSize(long maximumFileSize) { this.maximumFileSize = maximumFileSize; } public void activateOptions() { } public boolean isTriggeringEvent(Appender appender, LoggingEvent event, String filename, long fileLength) { try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } boolean result = (fileLength >= maximumFileSize); return result; } } and the log4j.xml: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd"> <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true"> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" /> </layout> </appender> <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender"> <param name="file" value="C:/temp/test/test_log4j.log" /> <rollingPolicy class="com.mypack.rolling.CustomTimeBasedRollingPolicy"> <param name="fileNamePattern" value="C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" /> </rollingPolicy> <triggeringPolicy class="com.mypack.rolling.CustomSizeBasedTriggeringPolicy"> <param name="MaxFileSize" value="200KB" /> </triggeringPolicy> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" /> </layout> </appender> <logger name="com.mypack.myrun" additivity="false"> <level value="debug" /> <appender-ref ref="FILE" /> </logger> <root> <priority value="debug" /> <appender-ref ref="console" /> </root> </log4j:configuration>

    Read the article

  • How to convert objects in reflection (Linq to XML)

    - by user829174
I have the following code, which works fine; it creates plugins via reflection at run time:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.IO;
    using System.Xml.Linq;
    using System.Reflection;
    using System.Text;

    namespace WindowsFormsApplication1
    {
        public abstract class Plugin
        {
            public string Type { get; set; }
            public string Message { get; set; }
        }

        public class FilePlugin : Plugin
        {
            public string Path { get; set; }
        }

        public class RegsitryPlugin : Plugin
        {
            public string Key { get; set; }
            public string Name { get; set; }
            public string Value { get; set; }
        }

        static class MyProgram
        {
            [STAThread]
            static void Main(string[] args)
            {
                string xmlstr = @"
                <Client>
                  <Plugin Type=""FilePlugin"">
                    <Message>i am a file plugin</Message>
                    <Path>c:\</Path>
                  </Plugin>
                  <Plugin Type=""RegsitryPlugin"">
                    <Message>i am a registry plugin</Message>
                    <Key>HKLM\Software\Microsoft</Key>
                    <Name>Version</Name>
                    <Value>3.5</Value>
                  </Plugin>
                </Client>
                ";

                Assembly asm = Assembly.GetExecutingAssembly();
                XDocument xDoc = XDocument.Load(new StringReader(xmlstr));

                Plugin[] plugins = xDoc.Descendants("Plugin")
                    .Select(plugin =>
                    {
                        string typeName = plugin.Attribute("Type").Value;
                        var type = asm.GetTypes().Where(t => t.Name == typeName).First();
                        Plugin p = Activator.CreateInstance(type) as Plugin;
                        p.Type = typeName;
                        foreach (var prop in plugin.Descendants())
                        {
                            var pi = type.GetProperty(prop.Name.LocalName);
                            object newVal = Convert.ChangeType(prop.Value, pi.PropertyType);
                            pi.SetValue(p, newVal, null);
                        }
                        return p;
                    }).ToArray();

                //
                // "plugins" ready to use
                //
            }
        }
    }

I am trying to modify the code by adding a new class named DetailedPlugin, so that it can read the following XML:

    string newXml = @"
    <Client>
      <Plugin Type=""FilePlugin"">
        <Message>i am a file plugin</Message>
        <Path>c:\</Path>
        <DetailedPlugin Type=""DetailedFilePlugin"">
          <Message>I am a detailed file plugin</Message>
        </DetailedPlugin>
      </Plugin>
      <Plugin Type=""RegsitryPlugin"">
        <Message>i am a registry plugin</Message>
        <Key>HKLM\Software\Microsoft</Key>
        <Name>Version</Name>
        <Value>3.5</Value>
        <DetailedPlugin Type=""DetailedRegsitryPlugin"">
          <Message>I am a detailed registry plugin</Message>
        </DetailedPlugin>
      </Plugin>
    </Client>
    ";

For this I modified my classes as follows:

    public abstract class Plugin
    {
        public string Type { get; set; }
        public string Message { get; set; }
        public DetailedPlugin DetailedPlugin { get; set; } // new
    }

    public class FilePlugin : Plugin
    {
        public string Path { get; set; }
    }

    public class RegsitryPlugin : Plugin
    {
        public string Key { get; set; }
        public string Name { get; set; }
        public string Value { get; set; }
    }

    // new classes:
    public abstract class DetailedPlugin
    {
        public string Type { get; set; }
        public string Message { get; set; }
    }

    public class DetailedFilePlugin : Plugin
    {
        public string ExtraField1 { get; set; }
    }

    public class DetailedRegsitryPlugin : Plugin
    {
        public string ExtraField2 { get; set; }
    }

From here I need some help reading the new XML and creating the plugins together with their nested DetailedPlugin.
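A minimal, untested sketch of one way to finish this. Two assumptions: DetailedFilePlugin and DetailedRegsitryPlugin are meant to derive from DetailedPlugin rather than Plugin (the posted inheritance looks like a typo), and the property loop switches from Descendants() to Elements() so the nested element's children are not mistaken for properties of the parent:

    static Plugin BuildPlugin(XElement el, Assembly asm)
    {
        string typeName = el.Attribute("Type").Value;
        var type = asm.GetTypes().First(t => t.Name == typeName);
        var p = (Plugin)Activator.CreateInstance(type);
        p.Type = typeName;

        foreach (var prop in el.Elements()) // direct children only
        {
            if (prop.Name.LocalName == "DetailedPlugin")
            {
                // Build the nested object the same way, from its own Type attribute.
                string detailTypeName = prop.Attribute("Type").Value;
                var detailType = asm.GetTypes().First(t => t.Name == detailTypeName);
                var d = (DetailedPlugin)Activator.CreateInstance(detailType);
                d.Type = detailTypeName;
                d.Message = (string)prop.Element("Message");
                p.DetailedPlugin = d;
            }
            else
            {
                var pi = type.GetProperty(prop.Name.LocalName);
                pi.SetValue(p, Convert.ChangeType(prop.Value, pi.PropertyType), null);
            }
        }
        return p;
    }

    // Usage:
    // Plugin[] plugins = xDoc.Descendants("Plugin")
    //                        .Select(e => BuildPlugin(e, asm)).ToArray();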

    Read the article

  • Linq to List and IEnumerable issues

    - by Otaku
I am querying an HTML file with LINQ. It looks something like this:

    <html>
    <body>
      <div class="Players">
        <div class="role">Goalies</div>
        <div class="name">John Smith</div>
        <div class="name">Shawn Xie</div>
        <div class="role">Right Wings</div>
        <div class="name">Jack Davis</div>
        <div class="name">Carl Yuns</div>
        <div class="name">Wayne Gortonia</div>
        <div class="role">Centers</div>
        <div class="name">Lutz Gaspy</div>
        <div class="name">John Jacobs</div>
      </div>
    </body>
    </html>

What I'm trying to do is create a list of these folks as a list of a structure called Players:

    Structure Players
        Public Name As String
        Public Position As String
    End Structure

But I've quickly found out I don't really know what I'm doing when it comes to LINQ. I've gotten this far with my queries:

    Dim goalieList = From d In player.Elements _
                     Where d.Value = "Goalies" _
                     Select From g In d.ElementsAfterSelf _
                            Take While (g.@class <> "role") _
                            Select New Players With {.Position = "Goalie", _
                                                     .Name = g.Value}

    Dim centersList = From d In player.Elements _
                      Where d.Value = "Centers" _
                      Select From g In d.ElementsAfterSelf _
                             Take While (g.@class <> "role") _
                             Select New Players With {.Position = "Centers", _
                                                      .Name = g.Value}

This gets me down to the players by position, but then I can't do much with it afterwards; the result type is

    System.Collections.Generic.IEnumerable(Of System.Collections.Generic.IEnumerable(Of Player))

What I want to do is add these two results to a new list, like:

    Dim playersList As List(Of Players) = Nothing
    playersList.AddRange(centersList)
    playersList.AddRange(goalieList)

so that I can then query the list and use it. But it throws the error:

    Unable to cast object of type 'WhereSelectEnumerableIterator`2[System.Xml.Linq.XElement,System.Collections.Generic.IEnumerable`1[Players]]' to type 'System.Collections.Generic.IEnumerable`1[Players]'

As you can see, I may really have no idea how to work with all these objects/classes. Does anyone have any insight into what I may be doing wrong and how I can resolve it?

RESOLVED: The LINQ query needs to return a single IEnumerable, like this:

    Dim goalieList = From l In _
        (From d In players.Elements _
         Where d.Value = "Goalies" _
         Select d.ElementsAfterSelf.TakeWhile(Function(f) f.@class <> "role")) _
        Select New Players With {.Position = "Goalie", .Name = l.Value}

and then use goalieList.ToList
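One small follow-up worth noting, sketched from the names above: with each query now returning a flat IEnumerable(Of Players), the two results can be combined directly, but the target list must be initialized rather than set to Nothing, or AddRange will throw a NullReferenceException:

    Dim playersList As New List(Of Players)  ' initialized, not Nothing
    playersList.AddRange(goalieList)
    playersList.AddRange(centersList)
    ' playersList can now be queried like any other collection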

    Read the article

  • PayPal - CreateRecurringPaymentsProfile - is this request valid?

    - by PHP thinker
I am sending this request to create a recurring payment (a SOAP request to the Sandbox), but in the response I get an error about a missing token and other invalid fields ("Missing Token or payment source"). What could be wrong with this CreateRecurringPaymentsProfile request?

    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
                       xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
                       xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                       xmlns:xsd="http://www.w3.org/1999/XMLSchema"
                       SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <SOAP-ENV:Header>
        <RequesterCredentials xmlns="urn:ebay:api:PayPalAPI" SOAP-ENV:mustUnderstand="1">
          <Credentials xmlns="urn:ebay:apis:eBLBaseComponents">
            <Username>xxxxx_biz_api1.gmail.com</Username>
            <Password>xxxxxxx</Password>
            <Subject/>
          </Credentials>
        </RequesterCredentials>
      </SOAP-ENV:Header>
      <SOAP-ENV:Body>
        <CreateRecurringPaymentsProfileReq xmlns="urn:ebay:api:PayPalAPI">
          <CreateRecurringPaymentsProfileRequest>
            <Version xmlns="urn:ebay:apis:eBLBaseComponents" xsi:type="xsd:string">58.0</Version>
            <CreateRecurringPaymentsProfileRequestDetails>
              <RecurringPaymentsProfileDetails xmlns="urn:ebay:apis:eBLBaseComponents">
                <BillingStartDate></BillingStartDate>
              </RecurringPaymentsProfileDetails>
              <ScheduleDetails>
                <Description>Must match</Description>
                <PaymentPeriod>
                  <BillingPeriod>Day</BillingPeriod>
                  <BillingFrequency>1</BillingFrequency>
                  <Amount>39.95</Amount>
                </PaymentPeriod>
              </ScheduleDetails>
              <Token>EC-480620864W522011V</Token>
            </CreateRecurringPaymentsProfileRequestDetails>
          </CreateRecurringPaymentsProfileRequest>
        </CreateRecurringPaymentsProfileReq>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>

P.S. I am sending this request the correct way, after the "DoExpressCheckout" command.
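Two things in the request stand out; both are guesses based on the API reference and not verified against the 58.0 schema. First, <BillingStartDate> is empty, and it must carry a UTC timestamp. Second, <Amount> is a BasicAmountType and normally needs a currencyID attribute. A corrected fragment might look like this, with placeholder values:

    <RecurringPaymentsProfileDetails xmlns="urn:ebay:apis:eBLBaseComponents">
      <!-- must be a non-empty UTC timestamp -->
      <BillingStartDate>2010-06-01T00:00:00Z</BillingStartDate>
    </RecurringPaymentsProfileDetails>
    <ScheduleDetails>
      <Description>Must match</Description>
      <PaymentPeriod>
        <BillingPeriod>Day</BillingPeriod>
        <BillingFrequency>1</BillingFrequency>
        <!-- BasicAmountType carries a currencyID attribute -->
        <Amount currencyID="USD">39.95</Amount>
      </PaymentPeriod>
    </ScheduleDetails>

It may also matter that an Express Checkout token is short-lived and single-use: if DoExpressCheckoutPayment has already consumed the token, a later CreateRecurringPaymentsProfile with the same token can fail with exactly this kind of "missing token" error unless a billing agreement was requested in SetExpressCheckout.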

    Read the article

  • Svcutil generating bad config with multiple endpoints

    - by vfilby
I have a WCF service that exposes a SOAP and an XML endpoint. When I use svcutil to generate the proxy code on the client side, the generated configuration contains two endpoints, which causes the client to fail. If I edit the web.config file and remove the second endpoint (with the custom binding), all works as expected. Is there a way I can get svcutil to generate a config that just works, so that I don't need to hand-edit the file every time?

Client-side error:

    An endpoint configuration section for contract 'MyNamespace.ITestService' could not be loaded because more than one endpoint configuration for that contract was found. Please indicate the preferred endpoint configuration section by name.

Svcutil command:

    svcutil http://api.local/Test.svc /reference:bin\MyNamespace.Interface.dll /config:web.config /mergeConfig /out:"Service References\TestService.cs" /n:*,MyNamespace

Generated client config:

    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding name="BasicHttpBinding_ITestService" closeTimeout="00:01:00" openTimeout="00:01:00"
                   receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                   bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                   maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                   messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                   useDefaultWebProxy="true">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <security mode="None">
              <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
              <message clientCredentialType="UserName" algorithmSuite="Default" />
            </security>
          </binding>
        </basicHttpBinding>
        <customBinding>
          <binding name="CustomBinding_ITestService">
            <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12"
                                 writeEncoding="utf-8">
              <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                            maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            </textMessageEncoding>
          </binding>
        </customBinding>
      </bindings>
      <client>
        <endpoint address="http://api2.local/Test.svc/soap" binding="basicHttpBinding"
                  bindingConfiguration="BasicHttpBinding_ITestService"
                  contract="MyNamespace.ITestService" name="BasicHttpBinding_ITestService" />
        <endpoint binding="customBinding" bindingConfiguration="CustomBinding_ITestService"
                  contract="MyNamespace.ITestService" name="CustomBinding_ITestService" />
      </client>
    </system.serviceModel>
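One workaround that avoids hand-editing: keep both generated endpoints and disambiguate in code by passing the endpoint configuration name to the proxy constructor, which is what the exception message is asking for. A minimal sketch, assuming the generated proxy class is named TestServiceClient:

    // Select the SOAP endpoint explicitly; the string matches the name
    // attribute of the <endpoint> element in the generated <client> section.
    var client = new TestServiceClient("BasicHttpBinding_ITestService");
    try
    {
        // ...call service operations as usual...
    }
    finally
    {
        client.Close();
    }

This does not stop svcutil from emitting both endpoints, but it makes the generated config usable as-is after every regeneration.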

    Read the article

  • XSLT: Splitting non-continuous elements / Grouping continuous elements

    - by Gerald
Hi! I need some help implementing this in XSLT. I had already implemented it in Java using a SAX parser, but that became troublesome due to customer requests for changes... so we are now doing it with XSLT, which doesn't need to be compiled and deployed to a web server. I have XML like the one below.

Example 1:

    <ShotRows>
      <ShotRow row="3" col="3" bit="1" position="1"/>
      <ShotRow row="3" col="4" bit="1" position="2"/>
      <ShotRow row="3" col="5" bit="1" position="3"/>
      <ShotRow row="3" col="6" bit="1" position="4"/>
      <ShotRow row="3" col="7" bit="1" position="5"/>
      <ShotRow row="3" col="8" bit="1" position="6"/>
      <ShotRow row="3" col="9" bit="1" position="7"/>
      <ShotRow row="3" col="10" bit="1" position="8"/>
      <ShotRow row="3" col="11" bit="1" position="9"/>
    </ShotRows>

Output 1:

    <ShotRows>
      <ShotRow row="3" colStart="3" colEnd="11" />
    </ShotRows>

-- because col runs consecutively from 3 to 11.

Example 2:

    <ShotRows>
      <ShotRow row="3" col="3" bit="1" position="1"/>
      <ShotRow row="3" col="4" bit="1" position="2"/>
      <ShotRow row="3" col="6" bit="1" position="3"/>
      <ShotRow row="3" col="7" bit="1" position="4"/>
      <ShotRow row="3" col="8" bit="1" position="5"/>
      <ShotRow row="3" col="10" bit="1" position="6"/>
      <ShotRow row="3" col="11" bit="1" position="7"/>
      <ShotRow row="3" col="15" bit="1" position="8"/>
      <ShotRow row="3" col="19" bit="1" position="9"/>
    </ShotRows>

Output 2:

    <ShotRows>
      <ShotRow row="3" colStart="3" colEnd="4" />
      <ShotRow row="3" colStart="6" colEnd="8" />
      <ShotRow row="3" colStart="10" colEnd="11" />
      <ShotRow row="3" colStart="15" colEnd="15" />
      <ShotRow row="3" colStart="19" colEnd="19" />
    </ShotRows>

-- The basic idea is to group every run of consecutive col values into one element: cols 3 to 4, 6 to 8, and 10 to 11 each form a run, while col 15 and col 19 each stand alone. Thanks in advance. Sincerely, Gerald
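A minimal, untested XSLT 1.0 sketch of one approach: treat every ShotRow whose col is not exactly one more than its predecessor's as the start of a run, then walk forward recursively until the run breaks. These two templates would be dropped into a stylesheet alongside your other rules:

    <xsl:template match="ShotRows">
      <ShotRows>
        <!-- run starts: no immediate predecessor whose col equals this col - 1 -->
        <xsl:apply-templates mode="run"
            select="ShotRow[not(preceding-sibling::ShotRow[1]/@col = @col - 1)]"/>
      </ShotRows>
    </xsl:template>

    <xsl:template match="ShotRow" mode="run">
      <xsl:param name="start" select="@col"/>
      <xsl:variable name="next" select="following-sibling::ShotRow[1]"/>
      <xsl:choose>
        <!-- the run continues: same row, col exactly one higher -->
        <xsl:when test="$next/@row = @row and $next/@col = @col + 1">
          <xsl:apply-templates select="$next" mode="run">
            <xsl:with-param name="start" select="$start"/>
          </xsl:apply-templates>
        </xsl:when>
        <!-- the run ends here: emit one grouped element -->
        <xsl:otherwise>
          <ShotRow row="{@row}" colStart="{$start}" colEnd="{@col}"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>

The sketch assumes siblings are already sorted by col, as in your examples; a run of one (like col 15) simply falls straight through to the otherwise branch.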

    Read the article

  • extract data from Plist to array and dictionary

    - by Boaz
Hi, I made a plist that looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <array>
      <array>
        <dict>
          <key>Company</key>
          <string>xxx</string>
          <key>Title</key>
          <string>VP Marketing</string>
          <key>Name</key>
          <string>Alon ddfr</string>
        </dict>
        <dict>
          <key>Name</key>
          <string>Adam Ben Shushan</string>
          <key>Title</key>
          <string>CEO</string>
          <key>Company</key>
          <string>Shushan ltd.</string>
        </dict>
      </array>
      <array>
        <dict>
          <key>Company</key>
          <string>xxx</string>
          <key>Title</key>
          <string>CTO</string>
          <key>Name</key>
          <string>Boaz frf</string>
        </dict>
      </array>
    </array>
    </plist>

Now I want to extract the data so that all the names starting with 'A' go into one section and all the names starting with 'B' into another:

    NSString *plistpath = [[NSBundle mainBundle] pathForResource:@"PeopleData" ofType:@"plist"];
    NSMutableArray *attendees = [[NSMutableArray alloc] initWithContentsOfFile:plistpath];
    listOfPeople = [[NSMutableArray alloc] init]; // Add items
    NSDictionary *indexADict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:0] objectForKey:@"Name"] forKey:@"Profiles"];
    NSDictionary *indexBDict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:1] objectForKey:@"Name"] forKey:@"Profiles"];
    [listOfPeople addObject:indexADict];
    [listOfPeople addObject:indexBDict];

This is in order to view them in a sectioned table view. I know that the problem is here:

    NSDictionary *indexADict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:0] objectForKey:@"Name"] forKey:@"Profiles"];

But I just can't figure out how to do it right. Thanks.
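The root cause appears to be that [attendees objectAtIndex:0] is one of the inner arrays, not a dictionary, so sending it objectForKey: fails. A minimal sketch in the same pre-ARC style: flatten the outer array and bucket each person by the first letter of Name, which generalizes beyond just 'A' and 'B':

    NSMutableDictionary *sections = [NSMutableDictionary dictionary];
    for (NSArray *group in attendees) {               // outer array of arrays
        for (NSDictionary *person in group) {         // each attendee dict
            NSString *name = [person objectForKey:@"Name"];
            NSString *letter = [[name substringToIndex:1] uppercaseString];
            NSMutableArray *bucket = [sections objectForKey:letter];
            if (bucket == nil) {
                bucket = [NSMutableArray array];
                [sections setObject:bucket forKey:letter];
            }
            [bucket addObject:person];
        }
    }
    // Sorted section titles for the table view data source methods:
    NSArray *letters = [[sections allKeys] sortedArrayUsingSelector:@selector(compare:)];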

    Read the article

  • MemoryStream, XmlTextWriter and Warning 4 CA2202 : Microsoft.Usage

    - by rasx
The Run Code Analysis command in Visual Studio 2010 Ultimate returns a warning when it sees a certain pattern with MemoryStream and XmlTextWriter. This is the warning:

    Warning 7 CA2202 : Microsoft.Usage : Object 'ms' can be disposed more than once in method 'KinteWritePages.GetXPathDocument(DbConnection)'. To avoid generating a System.ObjectDisposedException you should not call Dispose more than one time on an object.: Lines: 421 C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 421 Songhay.DataAccess.KinteWritePages

This is the form:

    static XPathDocument GetXPathDocument(DbConnection connection)
    {
        XPathDocument xpDoc = null;
        var ms = new MemoryStream();
        try
        {
            using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
            {
                using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                {
                    writer.WriteStartDocument();
                    writer.WriteStartElement("data");
                    do
                    {
                        while (reader.Read())
                        {
                            writer.WriteStartElement("item");
                            for (int i = 0; i < reader.FieldCount; i++)
                            {
                                writer.WriteRaw(String.Format("<{0}>{1}</{0}>", reader.GetName(i), reader[i].ToString()));
                            }
                            writer.WriteFullEndElement();
                        }
                    } while (reader.NextResult());
                    writer.WriteFullEndElement();
                    writer.WriteEndDocument();
                    writer.Flush();
                    ms.Position = 0;
                    xpDoc = new XPathDocument(ms);
                }
            }
        }
        finally
        {
            ms.Dispose();
        }
        return xpDoc;
    }

The same kind of warning is produced for this form:

    XPathDocument xpDoc = null;
    using (var ms = new MemoryStream())
    {
        using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
        {
            using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
            {
                //...
            }
        }
    }
    return xpDoc;

By the way, the following form produces another warning:

    XPathDocument xpDoc = null;
    var ms = new MemoryStream();
    using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
    {
        using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
        {
            //...
        }
    }
    return xpDoc;

The above produces the warning:

    Warning 7 CA2000 : Microsoft.Reliability : In method 'KinteWritePages.GetXPathDocument(DbConnection)', object 'ms' is not disposed along all exception paths. Call System.IDisposable.Dispose on object 'ms' before all references to it are out of scope. C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 383 Songhay.DataAccess.KinteWritePages

In addition to the following, what are my options?

1. Suppress warning CA2202.
2. Suppress warning CA2000 and hope that Microsoft is disposing of MemoryStream (because Reflector is not showing me the source code).
3. Rewrite my legacy code to recognize the wonderful XDocument and LINQ to XML.
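There is one more option, sketched below along the lines of what the CA2202 documentation suggests: transfer ownership of the stream to the writer and null out the local, so Dispose can only ever run once on any path. This is an untested sketch of the pattern, with the writing loop elided:

    static XPathDocument GetXPathDocument(DbConnection connection)
    {
        MemoryStream ms = null;
        try
        {
            ms = new MemoryStream();
            using (var writer = new XmlTextWriter(ms, Encoding.UTF8))
            {
                MemoryStream inner = ms;
                ms = null; // the writer now owns the stream and will dispose it
                // ...write the XML into the writer exactly as before...
                writer.Flush();
                inner.Position = 0;
                return new XPathDocument(inner); // fully read before disposal
            }
        }
        finally
        {
            // Runs only if an exception occurred before ownership was handed over.
            if (ms != null)
            {
                ms.Dispose();
            }
        }
    }

The analyzer sees that either the finally block or the writer's Dispose touches the stream, never both, which satisfies both CA2202 and CA2000 without suppressions.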

    Read the article

  • Separating positive values from 'zero' values in an XSLT for-each statement

    - by danielle
I am traversing an XML file (that contains multiple tables) using XSLT. Part of the job of the page is to get the title of each table and present that title along with the number of items the table contains (i.e. "Problems (5)"). I am able to get the number of items, but I now need to separate the sections with 0 (zero) items and put them at the bottom of the list of table titles. I'm having trouble with this because the other items, with positive counts, need to be left in their original order, not sorted. Here is the code for the list of titles:

    <ul>
      <xsl:for-each select="n1:component/n1:structuredBody/n1:component/n1:section/n1:title">
        <li style="list-style-type:none;">
          <div style="padding:3px">
            <a href="#{generate-id(.)}">
              <xsl:variable name="count" select="count(../n1:entry)"/>
              <xsl:choose>
                <xsl:when test="$count != 0">
                  <xsl:value-of select="."/> (<xsl:value-of select="$count"/>)
                </xsl:when>
                <xsl:otherwise>
                  <div id="zero"><xsl:value-of select="."/> (<xsl:value-of select="$count"/>)</div>
                </xsl:otherwise>
              </xsl:choose>
            </a>
          </div>
        </li>
      </xsl:for-each>
    </ul>

Right now, the "zero" div just marks each link as gray. Any help regarding how to place the "zero" divs at the bottom of the list would be greatly appreciated. Thank you!
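A minimal, untested sketch of one approach: run the same for-each twice over the title nodes, first with a predicate keeping sections that have entries, then with one keeping the empty sections. Document order is preserved within each pass, so the non-zero titles stay in their original order and the zero ones land at the bottom:

    <ul>
      <!-- pass 1: sections with at least one entry, original order -->
      <xsl:for-each select="n1:component/n1:structuredBody/n1:component/n1:section/n1:title[count(../n1:entry) != 0]">
        <li style="list-style-type:none;">
          <a href="#{generate-id(.)}"><xsl:value-of select="."/> (<xsl:value-of select="count(../n1:entry)"/>)</a>
        </li>
      </xsl:for-each>
      <!-- pass 2: empty sections, rendered last and styled gray -->
      <xsl:for-each select="n1:component/n1:structuredBody/n1:component/n1:section/n1:title[count(../n1:entry) = 0]">
        <li style="list-style-type:none;">
          <a href="#{generate-id(.)}"><span class="zero"><xsl:value-of select="."/> (0)</span></a>
        </li>
      </xsl:for-each>
    </ul>

(A class is used here instead of the repeated id="zero", since ids should be unique in the output document.)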

    Read the article

  • Help with selecting nodes with XPath

    - by fayer
My XML structure looks like this:

    <entity id="1000070">
      <name>apple</name>
      <type>category</type>
      <entities>
        <entity id="7002870">
          <name>mac</name>
          <type>category</type>
          <entities>
            <entity id="7002907">
              <name>leopard</name>
              <type>sub-category</type>
              <entities>
                <entity id="7024080">
                  <name>safari</name>
                  <type>subject</type>
                </entity>
                <entity id="7024701">
                  <name>finder</name>
                  <type>subject</type>
                </entity>
              </entities>
            </entity>
          </entities>
        </entity>
        <entity id="7024080">
          <name>iphone</name>
          <type>category</type>
          <entities>
            <entity id="7024080">
              <name>3g</name>
              <type>sub-category</type>
            </entity>
            <entity id="7024701">
              <name>3gs</name>
              <type>sub-category</type>
            </entity>
          </entities>
        </entity>
        <entity id="7024080">
          <name>ipad</name>
          <type>category</type>
        </entity>
      </entities>
    </entity>

Currently I have selected all entities whose type node is not category:

    $xmlDocument->removeNodes("//entity[not(type='category')]")

I wonder how I can select all nodes whose type is neither category nor sub-category. I have tried:

    $xmlDocument->removeNodes("//entity[not(type='category')] | //entity[not(type='sub-category')]")

but it doesn't work!
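The union form fails for a De Morgan reason: it keeps every entity for which at least one of the two not() tests is true, and that is every entity (a category node fails the first test but passes the second, and vice versa). Both conditions have to live inside a single predicate. Assuming removeNodes() accepts any XPath string, either of these sketches should work:

    // type is neither 'category' nor 'sub-category'
    $xmlDocument->removeNodes("//entity[not(type='category' or type='sub-category')]");

    // equivalent form
    $xmlDocument->removeNodes("//entity[not(type='category') and not(type='sub-category')]");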

    Read the article

  • Copy all childNodes to another element, in native JavaScript

    - by kroko
Hello, I have to change "unknown" contents of XML. The structure and content itself is valid.

Original:

    <blabla foo="bar">
      <aa>asas</aa>
      <ff>
        <cc>
          <dd />
        </cc>
      </ff>
      <gg attr2="2">
      </gg>
      ...
      ...
    </blabla>

becomes

    <blabla foo="bar">
      <magic>
        <aa>asas</aa>
        <ff>
          <cc>
            <dd />
          </cc>
        </ff>
        <gg attr2="2">
        </gg>
        ...
        ...
      </magic>
    </blabla>

thus adding a child directly under the document root node (document.documentElement) and "pushing" the original children under it. Here it has to be done in plain JavaScript (ECMAScript). The idea now is to:

    // Get the root node
    RootNode = mymagicdoc.documentElement;
    // Create a new magic element (that will contain the contents of the original root node)
    var magicContainer = mymagicdoc.createElement("magic");
    // Copy all root node children (and their subtrees - deep copy) to the magic node
    /* ????? here
    RootNodeClone = RootNode.cloneNode(true);
    RootNodeClone.childNodes......
    */
    // Remove all children from the root node
    while (RootNode.hasChildNodes())
        RootNode.removeChild(RootNode.firstChild);
    // Now that the root node is empty, add the magicContainer
    // node to it, containing all the children of the original root node
    RootNode.appendChild(magicContainer);

How do I do that /* ????? */ step? Or maybe someone has a much better solution in general for achieving the desired result? Thank you in advance!
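A minimal sketch of one answer: there is no need to clone or to remove children separately, because appendChild moves a node to its new parent. Repeatedly appending RootNode.firstChild therefore drains the root and fills the container in one pass:

    var rootNode = mymagicdoc.documentElement;
    var magicContainer = mymagicdoc.createElement("magic");

    // appendChild re-parents each node, so this loop both empties the root
    // and preserves the original child order inside <magic>.
    while (rootNode.firstChild) {
        magicContainer.appendChild(rootNode.firstChild);
    }

    rootNode.appendChild(magicContainer);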

    Read the article
