Search Results

Search found 1596 results on 64 pages for 'akshay deep lamba'.

Page 4/64 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Deep clone utility recommendation

    - by Supowski
    Is there any utility for deep cloning Java collections (arrays, Lists, Maps)? NOTE: I'd prefer a solution that doesn't use serialization, but one that uses the Object.clone() method. I can be sure that my custom objects will implement the clone() method and will use only Java standard classes that are cloneable...

    Read the article

  • Generic method to create deep copy of all elements in a collection

    - by bwarner
    I have various ObservableCollections of different object types. I'd like to write a single method that will take a collection of any of these object types and return a new collection where each element is a deep copy of the elements in the given collection. Here is an example for a specific class:

        private static ObservableCollection<PropertyValueRow> DeepCopy(ObservableCollection<PropertyValueRow> list)
        {
            ObservableCollection<PropertyValueRow> newList = new ObservableCollection<PropertyValueRow>();
            foreach (PropertyValueRow rec in list)
            {
                newList.Add((PropertyValueRow)rec.Clone());
            }
            return newList;
        }

    How can I make this method generic for any class which implements ICloneable?
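
    A minimal sketch of one possible generic version: constrain the type parameter to ICloneable and cast the result, since ICloneable.Clone() returns object. Note the copy is only as deep as each element's own Clone() implementation.

        // using System;
        // using System.Collections.ObjectModel;
        private static ObservableCollection<T> DeepCopy<T>(ObservableCollection<T> list) where T : ICloneable
        {
            var newList = new ObservableCollection<T>();
            foreach (T item in list)
            {
                newList.Add((T)item.Clone());
            }
            return newList;
        }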

    Read the article

  • How do I deep copy a DateTime object?

    - by Billy ONeal
        $date1 = $date2 = new DateTime();
        $date2->add(new DateInterval('P3Y'));

    Now $date1 and $date2 contain the same date -- three years from now. I'd like to create two separate datetimes, one which is parsed from a string and one with three years added to it. Currently I've hacked it up like this:

        $date2 = new DateTime($date1->format(DateTime::ISO8601));

    but that seems like a horrendous hack. Is there a "correct" way to deep copy a DateTime object?

    Read the article

  • Nasty deep nested loop in Rails

    - by CalebHC
    I have this nested loop that goes 4 levels deep to find all the image widgets and calculate their total size. This seems really inefficient and nasty! I have thought of putting the organization_id in the widget model so I could just call something like organization.widgets.(named_scope), but I feel like that's a bad shortcut. Is there a better way? Thanks

        class Organization < ActiveRecord::Base
          ...
          def get_image_widget_total
            total_size = 0
            self.trips.each do |t|
              t.phases.each do |phase|
                phase.pages.each do |page|
                  page.widgets.each do |widget|
                    if widget.widget_type == Widget::IMAGE
                      total_size += widget.image_file_size
                    end
                  end
                end
              end
            end
            return total_size
          end
          ...
        end

    Read the article

  • saving iPhone program state with a deep UINavigationController

    - by jr
    Can someone suggest a good way to save the program state (UINavigationController stack, etc.) of an iPhone application? My application obtains a bunch of information from the network, and I want to return the person back to the last screen they were on, even if it was 3 or 4 screens deep. I assume that I will need to reload the data from the network along the way as I recreate the UINavigationControllers; I don't necessarily have a problem with this. I'm thinking about maybe having my UINavigationController objects implement some type of protocol which allows me to save/set their state? I'm looking to hear from others who may have needed to implement a similar scenario and how they accomplished it. My application has a UITabBarController at the root and UINavigationController items for each tab bar item. Thanks!

    Read the article

  • Deep Null checking, is there a better way?

    - by Mattias Konradsson
    We've all been there: we have some deep property like cake.frosting.berries.loader that we need to check for null so there's no exception. The way to do this is to use a short-circuiting if statement:

        if (cake != null && cake.frosting != null && cake.frosting.berries != null) ...

    This strikes me as not very elegant, however; there should perhaps be an easier way to check the entire chain and see if it comes up against a null variable/property. So is it possible using some extension method, would it be a language feature, or is it just a bad idea?
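
    A minimal sketch of the extension-method approach (the With helper and the cake/frosting names are illustrative): each step returns null as soon as any link in the chain is null. In C# 6 and later, the null-conditional operator makes this a language feature: cake?.frosting?.berries?.loader.

        using System;

        public static class NullChainExtensions
        {
            // Applies the selector only when the source is non-null; otherwise propagates null.
            public static TResult With<T, TResult>(this T source, Func<T, TResult> selector)
                where T : class
                where TResult : class
            {
                return source == null ? null : selector(source);
            }
        }

        // Usage: var loader = cake.With(c => c.frosting).With(f => f.berries).With(b => b.loader);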

    Read the article

  • deep injection - spring

    - by Bob
    What is the best way (or what are the options) for accessing Spring components at layers deep within the application that aren't managed by Spring? For example, I have a low-level utility POJO class into which I need to autowire/inject a Spring component. I'll call it LowLevelHelper. There are multiple classes that use LowLevelHelper; most are layers away from anything that is hooked up with Spring. One option would be to make all the layers into Spring components, but that seems like I'm hacking my design to force Spring to help me. I have some complex things going on that won't be nearly as clean if I have to @Autowire all the pieces and never new anything. Another option might be to manually inject the component in the low-level class, but I'm not really sure if this is possible or the right solution.

    Read the article

  • How to do manual DI with deep object graphs and many dependencies properly

    - by Fabian
    I believe this question has been asked in one way or another, but I'm not getting it yet. We are doing a GWT project, and my project leader has disallowed GIN/Guice as a DI framework (new programmers are not going to understand it, he argued), so I'm trying to do the DI manually. Now I have a problem with deep object graphs. The object hierarchy from the UI looks like this: AppPresenter-DashboardPresenter-GadgetPresenter-GadgetConfigPresenter. The GadgetConfigPresenter, way down the object hierarchy tree, has a few dependencies like CustomerRepository, ProjectRepository, MandatorRepository, etc. So the GadgetPresenter which creates the GadgetConfigPresenter also has these dependencies, and so on, up to the entry point of the app which creates the AppPresenter. Is this the way manual DI is supposed to work? Doesn't this mean that I create all dependencies at boot time even if I don't need them? Would a DI framework like GIN/Guice help me here?
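
    For what it's worth, this is roughly how manual DI does look: the entry point acts as a composition root that knows how to build the whole graph, and injecting factories instead of instances avoids constructing everything at boot time. A minimal sketch of that wiring (shown in C# syntax for illustration since the pattern is language-agnostic; the presenter and repository names mirror the question, and the classes here are stubs):

        using System;

        class CustomerRepository { }
        class ProjectRepository { }
        class MandatorRepository { }

        class GadgetConfigPresenter
        {
            public GadgetConfigPresenter(CustomerRepository c, ProjectRepository p, MandatorRepository m) { }
        }

        class GadgetPresenter
        {
            private readonly Func<GadgetConfigPresenter> configFactory;

            // Depends only on a factory, so no GadgetConfigPresenter (or its repositories)
            // is constructed until one is actually requested.
            public GadgetPresenter(Func<GadgetConfigPresenter> configFactory)
            {
                this.configFactory = configFactory;
            }
        }

        class Program
        {
            static void Main()
            {
                // Composition root: the only place that knows how the graph fits together.
                var customers = new CustomerRepository();
                var projects = new ProjectRepository();
                var mandators = new MandatorRepository();
                var gadgetPresenter = new GadgetPresenter(
                    () => new GadgetConfigPresenter(customers, projects, mandators));
            }
        }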

    Read the article

  • Getting "stack level too deep" error when deploying with Capistrano, Rails 3.1 ruby 1.9.2

    - by Victor S
    Here is the log from the cap deploy script output around where the error occurs. Any suggestions why this might be happening? Thanks!

        [yup.la] executing command
        [yup.la] sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'
        ** [out :: yup.la] rake aborted!
        ** [out :: yup.la]
        ** [out :: yup.la] stack level too deep
        ** [out :: yup.la] (in /srv/www/portrait/releases/20120406051647/app/assets/stylesheets/mobile.css.scss)
        ** [out :: yup.la]
        ** [out :: yup.la] Tasks: TOP => assets:precompile:primary
        ** [out :: yup.la] (See full trace by running task with --trace)
        ** [out :: yup.la] command finished in 30868ms
        *** [deploy:update_code] rolling back
        * executing "rm -rf /srv/www/portrait/releases/20120406051647; true"
        servers: ["yup.la"]
        [yup.la] executing command
        [yup.la] sh -c 'rm -rf /srv/www/portrait/releases/20120406051647; true'
        command finished in 288ms
        failed: "sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'" on yup.la
        /Users/victorstan/Sites/portrait ?

    Read the article

  • I get "stack level too deep" error when using a named scope

    - by Brian Roisentul
    I'm using Ruby on Rails 2.3.8, and when I write the line shown below I get the "stack level too deep" error message. The model is called Announcement, and the line with the error looks like this:

        Tag.find(category_id).announcements.published

    where published is

        named_scope :published, :conditions => "announcements.state = 'published'"

    I use this named scope in many other places and it works fine. What am I doing wrong? (The relationship between the Tag and Announcement models is OK, because if I remove the .published method from that line it works just fine.)

    Read the article

  • Backbone inheritance - deep copying

    - by Ed .
    I've seen this question regarding inheritance in Backbone: Backbone.js view inheritance. Useful but doesn't answer my question. The problem I'm experiencing is this: say I have a class Panel (a model in this example):

        var Panel = Backbone.Model.extend({
            defaults: {
                name: 'my-panel'
            }
        });

    And then an AdvancedPanel:

        var AdvancedPanel = Panel.extend({
            defaults: {
                label: 'Click to edit'
            }
        });

    The following doesn't work:

        var advancedPanel = new AdvancedPanel();
        alert(advancedPanel.get('name')); // Undefined :(

    JSFiddle here: http://jsfiddle.net/hWmnb/ I guess I can see that I can achieve this myself through some custom extend function that creates a deep copy of the prototype, but this seems like a common thing that people might want from Backbone inheritance. Is there a standard way of doing it?

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in Reducing the Dimensionality of Data with Neural Networks of autoencoding the Olivetti face dataset with an adapted version of the MNIST digits MATLAB code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs enter the fine-tuning stage with a large amount of error and consequently fail to improve much during fine-tuning. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with

        negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);

    I'm fairly confident I am following the instructions found in the supporting material, but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible unit RBMs below, and for the whole deep training. The rest of the code can be found here.

    rbmvislinear.m:

        epsilonw  = 0.001; % Learning rate for weights
        epsilonvb = 0.001; % Learning rate for biases of visible units
        epsilonhb = 0.001; % Learning rate for biases of hidden units
        weightcost = 0.0002;
        initialmomentum = 0.5;
        finalmomentum = 0.9;

        [numcases numdims numbatches] = size(batchdata);

        if restart == 1,
          restart = 0;
          epoch = 1;

          % Initializing symmetric weights and biases.
          vishid = 0.1*randn(numdims, numhid);
          hidbiases = zeros(1,numhid);
          visbiases = zeros(1,numdims);
          poshidprobs = zeros(numcases,numhid);
          neghidprobs = zeros(numcases,numhid);
          posprods = zeros(numdims,numhid);
          negprods = zeros(numdims,numhid);
          vishidinc = zeros(numdims,numhid);
          hidbiasinc = zeros(1,numhid);
          visbiasinc = zeros(1,numdims);
          sigmainc = zeros(1,numhid);
          batchposhidprobs = zeros(numcases,numhid,numbatches);
        end

        for epoch = epoch:maxepoch,
          fprintf(1,'epoch %d\r',epoch);
          errsum = 0;
          for batch = 1:numbatches,
            if (mod(batch,100)==0)
              fprintf(1,' %d ',batch);
            end

            %%%%% START POSITIVE PHASE %%%%%
            data = batchdata(:,:,batch);
            poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
            batchposhidprobs(:,:,batch) = poshidprobs;
            posprods = data' * poshidprobs;
            poshidact = sum(poshidprobs);
            posvisact = sum(data);
            %%%%% END OF POSITIVE PHASE %%%%%

            poshidstates = poshidprobs > rand(numcases,numhid);

            %%%%% START NEGATIVE PHASE %%%%%
            negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
            neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
            negprods = negdata'*neghidprobs;
            neghidact = sum(neghidprobs);
            negvisact = sum(negdata);
            %%%%% END OF NEGATIVE PHASE %%%%%

            err = sum(sum( (data-negdata).^2 ));
            errsum = err + errsum;
            if epoch > 5,
              momentum = finalmomentum;
            else
              momentum = initialmomentum;
            end;

            %%%%% UPDATE WEIGHTS AND BIASES %%%%%
            vishidinc = momentum*vishidinc + ...
                epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
            visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
            hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
            vishid = vishid + vishidinc;
            visbiases = visbiases + visbiasinc;
            hidbiases = hidbiases + hidbiasinc;
            %%%%% END OF UPDATES %%%%%
          end
          fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
        end

    dofacedeepauto.m:

        clear all
        close all

        maxepoch = 200; % In the Science paper we use maxepoch=50, but it works just fine.
        numhid = 2000; numpen = 1000; numpen2 = 500; numopen = 30;

        fprintf(1,'Pretraining a deep autoencoder. \n');
        fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

        load fdata % makeFaceData;

        [numcases numdims numbatches] = size(batchdata);

        fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
        restart = 1;
        rbmvislinear;
        hidrecbiases = hidbiases;
        save mnistvh vishid hidrecbiases visbiases;

        maxepoch = 50;
        fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
        batchdata = batchposhidprobs;
        numhid = numpen;
        restart = 1;
        rbm;
        hidpen = vishid; penrecbiases = hidbiases; hidgenbiases = visbiases;
        save mnisthp hidpen penrecbiases hidgenbiases;

        fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
        batchdata = batchposhidprobs;
        numhid = numpen2;
        restart = 1;
        rbm;
        hidpen2 = vishid; penrecbiases2 = hidbiases; hidgenbiases2 = visbiases;
        save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

        fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
        batchdata = batchposhidprobs;
        numhid = numopen;
        restart = 1;
        rbmhidlinear;
        hidtop = vishid; toprecbiases = hidbiases; topgenbiases = visbiases;
        save mnistpo hidtop toprecbiases topgenbiases;

        backpropface;

    Thanks for your time

    Read the article

  • Deep copy objects from different namespaces

    - by Wasim
    Hi all, I have the following situation: I have a class User with the following properties:

        public class User {
            string userName;
            List<Contact> contacts;
            List<BookMark> bookMarks;
            ...
        }

    I have the same class in a different namespace, with some different properties. BTW, it's the same situation with its classes (Contact) and (BookMark). I need to make a deep copy of the same properties between the two classes. Actually, I arrived at this situation by having an Entity Framework edmx file. I created the first database (SQL Server 2008) from this model, and copied the same edmx file to another project and created the database with a SQL CE db. Now I get the first data model's objects via a WCF service and need to persist them in the local database in my application. The objects are the same, but there are some changes because of the modeling issue with a different database. Do you have any workaround for this issue? Thanks in advance ...
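
    A minimal sketch of one common approach, assuming the matching properties share names and exactly matching types (the NamespaceMapper class and MapByName method are illustrative names): copy same-named properties by reflection. Collections whose element types differ between the namespaces (the two Contact classes, say) still need per-element mapping, which is the problem a dedicated mapper library such as AutoMapper is built to solve.

        using System.Reflection;

        public static class NamespaceMapper
        {
            // Copies same-named, same-typed, writable properties from source onto a new TTarget.
            public static TTarget MapByName<TTarget>(object source) where TTarget : new()
            {
                var target = new TTarget();
                foreach (PropertyInfo sp in source.GetType().GetProperties())
                {
                    PropertyInfo tp = typeof(TTarget).GetProperty(sp.Name);
                    if (tp != null && tp.CanWrite && tp.PropertyType == sp.PropertyType)
                    {
                        tp.SetValue(target, sp.GetValue(source, null), null);
                    }
                }
                return target;
            }
        }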

    Read the article

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect ratio and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then to 512, then to 256, etc., down to a 1-pixel image. There are two, I suspect naive, approaches:

    1. For each image required, scale the original full-size image to the required size. However, it seems excessive to be scaling the full image down to the very small sizes.
    2. Having scaled from one level to the next, discard the original image and scale each successive scaled image as the source of the next smaller image. However, I suspect that this would generate images in the 256-64 range with poorer fidelity than option 1.

    Note that unlike the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, hence it needs to complete in a reasonable timeframe (tops 30 seconds). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here; any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
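
    A minimal sketch of option 2 with System.Drawing (assuming a square, power-of-two source; the method and file names are illustrative). Halving one level at a time with a high-quality filter loses little fidelity, since each destination pixel averages a small neighborhood of the previous level. Note that real Deep Zoom output also needs the per-level tile files and folder layout, which this sketch omits.

        using System;
        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.IO;

        static void BuildPyramid(Bitmap source, string outputDir)
        {
            Bitmap current = source;
            int size = source.Width;               // e.g. 2048; assumed square, power of two
            int level = (int)Math.Log(size, 2);    // deepest level index
            while (true)
            {
                current.Save(Path.Combine(outputDir, level + ".png"));
                if (size == 1) break;
                size /= 2;
                level--;
                var half = new Bitmap(size, size);
                using (Graphics g = Graphics.FromImage(half))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.DrawImage(current, 0, 0, size, size); // downsample the previous level only
                }
                if (current != source) current.Dispose();
                current = half;
            }
        }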

    Read the article

  • 'Stack level too deep' error in engine-like plugin with globalize

    - by nutsmuggler
    Hello folks. I have built an engine-like plugin thanks to the new features of Rails 2.3. It's a 'Product' module for a CMS, extrapolated from a previously existing (and working) model/controller. The plugin relies on easy_fckeditor and on globalize (the description and title fields are localised), and I suspect that globalize could be the culprit here... Everything works fine, except for the update action. I get the following error message (posting just the first lines; the whole trace is about attribute_methods):

        stack level too deep
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:64:in `generated_methods?'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:241:in `method_missing'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:249:in `method_missing'

    For reference, the full error stack is here: http://pastie.org/596546. I've tried to debug by eliminating the input fields one by one, but I keep getting the error. fckeditor doesn't seem to be the culprit (the error occurs even without fckeditor). This is the action:

        def update
          params[:product][:term_ids] ||= []
          @product = Product.find(params[:id])
          respond_to do |format|
            if @product.update_attributes(params[:product])
              flash[:notice] = t(:Product_was_successfully_updated)
              format.html { redirect_to products_path }
              format.xml  { head :ok }
            else
              format.html { render :action => "edit" }
              format.xml  { render :xml => @product.errors, :status => :unprocessable_entity }
            end
          end
        end

    As you see, it's quite straightforward. Of course I am not hoping someone will solve this question straight away; I'd just like a heads-up, a suggestion about where to look to solve this issue. Thanks in advance, Davide

    Read the article

  • deep linking in Excel sheets exported to html

    - by pomarc
    Hello everybody, I am working on a project where I must export a lot of Excel files to HTML. This is pretty straightforward using automation and saving as html. The problem is that many of these sheets have links to worksheets of some other files, and I must find a way to write a link to a single inner worksheet.

    When you export a multi-sheet Excel file to HTML, Excel creates a main htm file, a folder named filename_file, and inside this folder it writes down several files: a css file, an xml list of files, a file that creates the tab bar, and several htm files named sheetxxx.htm, each one representing a worksheet. When you open the main file, you can click the menu bar at the bottom, which lets you select the appropriate sheet. This is in fact a link, which replaces a frame's content with the sheetxxx.htm file. When this file is loaded, a javascript function that selects the right tab gets called.

    The exported files will be published on a web site. I will have to post-process each file and replace every link to the other xls files with a link to the matching htm file, finding a way to open the right worksheet. I think I could add a parameter to the processed htm file link url, such as myfile.htm?sh=sheet002.htm if I want to link to the second worksheet of myfile.htm (ex myfile.xls). After I've exported them, I could inject a simple javascript into each of the main files which, when they are loaded, could retrieve the sh parameter with jQuery (this is easy) and use it to somehow replace the frSheet frame contents (where the sheets get loaded), opening the right inner sheet and not the default sheet (this is what I call deep linking), mimicking what happens when a user clicks on a tab. This last step is missing... :)

    I am considering different options, such as replacing the source of the $("frSheet") frame after document.ready. I'd like to hear any advice on what could be the best way to realize this. Any help is greatly appreciated, many thanks.

    Read the article

  • Using linq2xml to query a single item deep within

    - by BrettRobi
    I'm wondering if there is a robust and graceful way, using LINQ to XML, to query an item deep within an XML hierarchy. For example:

        <?xml version="1.0" encoding="utf-8" ?>
        <data>
          <core id="01234">
            <field1>some data</field1>
            <field2>more data</field2>
            <metadata>
              <response>
                <status code="0">Success</status>
                <content>
                  <job>f1b5c3f8-e6b1-4ae4-905a-a7c5de3f13c6</job>
                  <id>id of the content</id>
                </content>
              </response>
            </metadata>
          </core>
        </data>

    With this XML, how do I query the value of <id> without using something like this:

        var doc = XElement.Load("file.xml");
        string id = (string)doc.Element("core")
                               .Element("metadata")
                               .Element("response")
                               .Element("content")
                               .Element("id");

    I don't like the above approach because it is error-prone (it throws an exception if any tag is missing in the hierarchy) and, honestly, ugly. Is there a more robust and graceful approach, perhaps using the SQL-like syntax of LINQ?
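
    A minimal sketch of the usual null-safe alternative: Descendants searches the whole subtree, and the explicit string conversion of a missing (null) element yields null instead of throwing.

        // using System.Linq;
        // using System.Xml.Linq;
        var doc = XElement.Load("file.xml");
        string id = (string)doc.Descendants("id").FirstOrDefault();

        // Or, keeping the query-expression (SQL-like) syntax:
        string id2 = (from content in doc.Descendants("content")
                      select (string)content.Element("id")).FirstOrDefault();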

    Read the article

  • Delete on a very deep tree

    - by Kathoz
    I am building a suffix trie (unfortunately, no time to properly implement a suffix tree) for a 10-character set. The strings I wish to parse are going to be rather long (up to 1M characters). The tree is constructed without any problems; however, I run into some when I try to free the memory after being done with it. In particular, if I set up my constructor and destructor like this (where CNode.child is a pointer to an array of 10 pointers to other CNodes, and count is a simple unsigned int):

        CNode::CNode(){
            count = 0;
            child = new CNode* [10];
            memset(child, 0, sizeof(CNode*) * 10);
        }

        CNode::~CNode(){
            for (int i=0; i<10; i++)
                delete child[i];
        }

    I get a stack overflow when trying to delete the root node. I might be wrong, but I am fairly certain that this is due to too many nested destructor calls (each destructor calls up to 10 other destructors). I know this is suboptimal both space- and time-wise; however, this is supposed to be a quick-and-dirty solution to the repeated substring problem. tl;dr: how would one go about freeing the memory occupied by a very deep tree? Thank you for your time.
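
    The usual fix is to make the teardown iterative, replacing the implicit call stack with an explicit one so tree depth no longer matters. A sketch of the pattern (shown in C# with a stub CNode for illustration; in the C++ version you would pop a node, push its non-null children, then delete[] its child array and delete the node itself):

        using System.Collections.Generic;

        class CNode
        {
            public CNode[] Children = new CNode[10];
        }

        static class TrieTeardown
        {
            // Iterative teardown: an explicit stack replaces the deep recursion.
            public static void TearDown(CNode root)
            {
                var pending = new Stack<CNode>();
                pending.Push(root);
                while (pending.Count > 0)
                {
                    CNode node = pending.Pop();
                    foreach (CNode child in node.Children)
                    {
                        if (child != null) pending.Push(child);
                    }
                    node.Children = null; // C++ equivalent: delete[] node->child; then free the node
                }
            }
        }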

    Read the article

  • jQuery.extend() not giving deep copy of object formed by constructor

    - by two7s_clash
    I'm trying to use this to clone a complicated object. The object in question has a property that is an array of other objects, and each of these has properties of different types, mostly primitives, but a couple of further objects and arrays. For example, an ellipsed version of what I am trying to clone:

        var asset = new Assets();

        function Assets() {
            this.values = [];
            this.sectionObj = Section;
            this.names = getNames;
            this.titles = getTitles;
            this.properties = getProperties;
            ...
            this.add = addAsset;

            function AssetObj(assetValues) {
                this.name = "";
                this.title = "";
                this.interface = "";
                ...
                this.protected = false;
                this.standaloneProtected = true;
                ...
                this.chaptersFree = [];
                this.chaptersUnavailable = [];
                ...
                this.mediaOptions = {
                    videoWidth: "",
                    videoHeight: "",
                    downloadMedia: true,
                    downloadMediaExt: "zip"
                    ...
                }
                this.chaptersAvailable = [];
                if (typeof assetValues == "undefined") {
                    return;
                }
                for (var name in assetValues) {
                    if (typeof assetValues[name] == "undefined") {
                        this[name] = "";
                    } else {
                        this[name] = assetValues[name];
                    }
                }
            }
            ...
            function Asset() {
                return new AssetObj();
            }
            ...
            function getProperties() {
                var propertiesArray = new Array();
                for (var property in this.values[0]) {
                    propertiesArray.push(property);
                }
                return propertiesArray;
            }
            ...
            function addAsset(assetValues) {
                var newValues;
                newValues = new AssetObj(assetValues);
                this.values.push(newValues);
            }
        }

    When I do

        var copiedAssets = $.extend(true, {}, assets);

    copiedAssets.values == [], while assets.values == [Object { name="section_intro", more...}, Object { name="select_textbook", more...}, Object { name="quiz", more...}, 11 more...]. When I do

        var copiedAssets = $.extend({}, assets);

    all copiedAssets.values[X].properties are just pointers to the values in assets. What I want is a true deep copy all the way down. What am I missing? Do I need to write a custom extend function? If so, any recommended patterns?

    Read the article

  • Dozer deep mapping not working

    - by user363900
    I am trying to use Dozer to map between classes. I have a source class that looks like this:

        public class initRequest {
            protected String id;
            protected String[] details;
        }

    I have a destination class that looks like this:

        public class initResponse {
            protected String id;
            protected DetailsObject detObj;
        }

        public class DetailsObject {
            protected List<String> details;
        }

    So essentially I want the strings in the details array to be populated into the List in the DetailsObject. I have tried a mapping like this:

        <mapping wildcard="true">
          <class-a>com.starwood.valhalla.sgpms.dto.InitiateSynchronizationDTO</class-a>
          <class-b>com.starwood.valhalla.psi.sgpms.dto.InitiateSynchronizationDTO</class-b>
          <field>
            <a is-accessible="true">reservations</a>
            <b is-accessible="true">reservations.confirmationNum</b>
          </field>
        </mapping>

    But I get this error:

        Exception in thread "main" net.sf.dozer.util.mapping.MappingException: java.lang.NoSuchFieldException: reservations.confirmationNum
            at net.sf.dozer.util.mapping.util.MappingUtils.throwMappingException(MappingUtils.java:91)
            at net.sf.dozer.util.mapping.propertydescriptor.FieldPropertyDescriptor.<init>(FieldPropertyDescriptor.java:43)
            at net.sf.dozer.util.mapping.propertydescriptor.PropertyDescriptorFactory.getPropertyDescriptor(PropertyDescriptorFactory.java:53)
            at net.sf.dozer.util.mapping.fieldmap.FieldMap.getDestPropertyDescriptor(FieldMap.java:370)
            at net.sf.dozer.util.mapping.fieldmap.FieldMap.getDestFieldType(FieldMap.java:103)
            at net.sf.dozer.util.mapping.util.MappingsParser.processMappings(MappingsParser.java:95)
            at net.sf.dozer.util.mapping.util.CustomMappingsLoader.load(CustomMappingsLoader.java:77)
            at net.sf.dozer.util.mapping.DozerBeanMapper.loadCustomMappings(DozerBeanMapper.java:149)
            at net.sf.dozer.util.mapping.DozerBeanMapper.getMappingProcessor(DozerBeanMapper.java:132)
            at net.sf.dozer.util.mapping.DozerBeanMapper.map(DozerBeanMapper.java:94)
            at com.starwood.valhalla.mte.translator.StarguestPMSIntegrationDozerAdapter.adaptInit(StarguestPMSIntegrationDozerAdapter.java:30)

    How can I map this so that it works?

    Read the article

  • Setting variable values in a Moq Callback() call

    - by Adam Driscoll
    I think I may be a bit confused about the syntax of the Moq Callback methods. When I try to do something like this:

        IFilter filter = new Filter();
        List<IFoo> objects = new List<IFoo> { new Foo(), new Foo() };
        IQueryable myFilteredFoos = null;
        mockObject.Setup(m => m.GetByFilter(It.IsAny<IFilter>()))
                  .Callback((IFilter filter) => myFilteredFoos = filter.FilterCollection(objects))
                  .Returns(myFilteredFoos.Cast<IFooBar>());

    it throws an exception, because myFilteredFoos is null during the Cast<IFooBar>() call. Is this not working as I expect? I would think FilterCollection would be called, and then myFilteredFoos would be non-null and allow the cast. FilterCollection is not capable of returning null, which draws me to the conclusion that it is not being called. Also, when I declare myFilteredFoos like this:

        IQueryable myFilteredFoos;

    the Returns call complains that myFilteredFoos may be used before it is initialized.
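
    The reason: the argument to Returns is evaluated immediately, when Setup runs, while the Callback only fires later, when GetByFilter is actually invoked, so Cast<IFooBar>() sees the initial null. A minimal sketch of the usual fix (assuming FilterCollection returns an IQueryable): hand Returns a lambda, which Moq evaluates lazily at invocation time with the actual argument, making the Callback unnecessary.

        mockObject.Setup(m => m.GetByFilter(It.IsAny<IFilter>()))
                  .Returns((IFilter f) => f.FilterCollection(objects).Cast<IFooBar>());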

    Read the article

  • Programmatically compose images into one grid in Deep Zoom

    - by val
    Folks, the problem: there are individual 1MB images that represent a grid of images that should be placed side by side with no overlap and then converted/exported to Deep Zoom. Images are named by row and column, like image_col_row.jpg. The challenge: the total number of images is in the thousands, which makes manual composition and exporting in DZC prohibitive. I tried to first stitch the images programmatically into one huge image and then use DZC, but no luck, because the export choked on the size. If you have programmatically composed and exported Deep Zoom images, please point me in the right direction. Thanks, Val
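
    One way to sidestep the huge stitched image entirely: because each Deep Zoom level is a 2x downsampling of the one below, the deepest level's tiles can be cut straight from the source images, and every coarser level can then be produced by merging four child tiles and halving. A minimal sketch of that merge step with System.Drawing (the method name and tile-size parameter are illustrative; a full exporter also needs the DZI XML manifest and tile folder conventions):

        using System.Drawing;
        using System.Drawing.Drawing2D;

        // Merges a 2x2 block of child tiles from the finer level into one parent tile.
        static Bitmap MergeChildren(Bitmap topLeft, Bitmap topRight,
                                    Bitmap bottomLeft, Bitmap bottomRight,
                                    int tileSize)
        {
            var parent = new Bitmap(tileSize, tileSize);
            using (Graphics g = Graphics.FromImage(parent))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                int half = tileSize / 2;
                // Children at the grid edge may be missing; skip null tiles.
                if (topLeft != null)     g.DrawImage(topLeft, 0, 0, half, half);
                if (topRight != null)    g.DrawImage(topRight, half, 0, half, half);
                if (bottomLeft != null)  g.DrawImage(bottomLeft, 0, half, half, half);
                if (bottomRight != null) g.DrawImage(bottomRight, half, half, half, half);
            }
            return parent;
        }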

    Read the article

  • Doubts About Core Data NSManagedObject Deep Copy

    - by Jigzat
    Hello everyone, I have a situation where I must copy one NSManagedObject from the main context into an editing context. It sounds unnecessary to most people, as I have seen in similar situations described on Stack Overflow, but it looks like I need it. In my app there are many views in a tab bar, and every view handles different information that is related to the other views. I think I need multiple MOCs, since the user may jump from tab to tab and leave unsaved changes in some tab, but maybe the app saves data in some other tab/view; if that happens, the changes in the rest of the views are saved without user consent and, in the worst-case scenario, make the app crash. For adding new information I got away with using an adding MOC and then merging changes in both MOCs, but for editing it's not that easy. I saw a similar situation here on Stack Overflow, but the app crashes since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have a many-to-many relationship, just one-to-many). I think it can be modified so I can retrieve the relationships as if they were attributes:

        for (NSString *attr in relationships) {
            [cloned setValue:[source valueForKey:attr] forKey:attr];
        }

    but I don't know how to merge the changes of the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity, since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one. Can someone please enlighten me about this?

    Read the article

  • Overriding deep functions in javascript

    - by PintSizedCat
    I'm quite new to JavaScript but have undertaken a task to get better acquainted with it. However, I am running into some problems with jQuery. The following JavaScript is the code from a third-party jQuery plugin, and I would love to be able to override the funFunction() function here to do my own implementation. Is this possible? If so, how can I do it? I've been doing a fair amount of searching and have tried a number of methods for overriding the function, using things like:

        jQuery.blah.funFunction = function() { alert("like this"); };

    Main code:

        (function($) {
            $.extend({
                blah: new function() {
                    this.construct = function(settings) {
                        // Construct... stuff
                    };
                    function funFunction() {
                        // Function I want to override
                    }
                }
            });
        })(jQuery);

    For those further interested: I am trying to override tablesorter so that the only way a user can sort a column is in ascending order only. Edit: There is a WordPress installation that uses WP-Table-Reloaded, which in turn uses this plugin. I don't want to change the core code for this plugin because if there was ever an update I would then have to make sure that my predecessor knew exactly what I had done. I've been programming for a long time and feel like I should easily be able to pick up JavaScript whilst also looking at jQuery. I know exactly what I need to do for this, just not how I can override this function.

    Read the article
