Search Results

Search found 2880 results on 116 pages for 'deep linking'.

Page 8/116 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the week in Geek History. This week we’re taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison.

    Gmail Goes Public

    It’s hard to believe that Gmail has only been around for seven years and that for the first three years of its life it was invite-only. In 2007 Gmail dropped the invite-only requirement (although it would hold onto the “beta” tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history, Gmail had the slickest web-based email around, with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can’t stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we’re not kidding.

    Deep Blue Proves Itself a Chess Master

    Deep Blue was a supercomputer constructed by IBM with the sole purpose of winning chess matches. In 2011, with the all-seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha, we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chess champion Garry Kasparov, wherein Deep Blue held its own but ultimately lost 4-2, shook a lot of people up. What did it mean if something that was considered such an elegant and quintessentially human endeavor as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997 (seen in the photo above). After the win Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of American History and the Computer History Museum.

    Birth of Thomas Edison

    Thomas Alva Edison was one of the most prolific inventors in history and holds an astounding 1,093 US patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture, including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we’d take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games.

    Other Notable Moments from This Week in Geek History

    Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn’t mean we don’t have space to highlight a few more in passing. This week in Geek History:

    1971 – Apollo 14 returns to Earth after the third manned Lunar landing mission.
    1974 – Birth of Robot Chicken creator Seth Green.
    1986 – Death of Dune creator Frank Herbert. Goodnight Dune.
    1997 – The Simpsons becomes the longest-running animated show on television.

    Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with “history” in the subject line and we’ll be sure to add it to our list of trivia.

    Read the article

  • Is deep Java knowledge needed for Android?

    - by MigNix
    Hi, I am a C++ developer interested in Android. As I understand it, the only way to develop applications for Android is Java. There is also the NDK, but as far as I can see it is just something like JNI for Java. Is it mandatory to learn Java, or to have deep knowledge of Java, before trying the Android SDK, or would it be possible to learn Java while developing for Android? Thank you.

    Read the article

  • Deep clone utility recommendation

    - by Supowski
    Is there any utility for deep cloning of Java collections (arrays, lists, maps)? NOTE: I'd prefer a solution that does not use serialization, but rather the Object.clone() method. I can be sure that my custom object will implement the clone() method and will use only Java-standard classes that are cloneable...
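
    A minimal sketch of such a helper, assuming (as stated above) that each element class overrides clone() publicly. Since Object.clone() is protected, the sketch looks up the public override reflectively; DeepCopyUtil and deepCopy are invented names, not a library API:

        import java.util.ArrayList;
        import java.util.List;

        final class DeepCopyUtil {
            // Deep-copies a list whose element classes expose a public clone().
            @SuppressWarnings("unchecked")
            static <T extends Cloneable> List<T> deepCopy(List<T> source) throws Exception {
                List<T> copy = new ArrayList<T>(source.size());
                for (T element : source) {
                    // Object.clone() is protected, so invoke the public override via reflection.
                    copy.add((T) element.getClass().getMethod("clone").invoke(element));
                }
                return copy;
            }
        }

    The same per-element idea extends to maps by cloning keys and values individually.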

    Read the article

  • Linking to libcrypto for Leopard?

    - by adib
    Hi, I am using the Mac OS X 10.6 SDK and my deployment target is set to Mac OS X 10.5. I'm linking to libcrypto (AquaticPrime requires this) and found out that my app doesn't launch on Leopard. The error is:

        dyld: Library not loaded: /usr/lib/libcrypto.0.9.8.dylib

    I've tried the following workarounds, but none of them work:

    1. Linking directly to libcrypto.0.9.7.dylib (the 10.6 SDK refuses to link directly with libcrypto.0.9.7.dylib).
    2. Copying the 10.5 SDK's version of libcrypto.0.9.7.dylib to the 10.6 lib directory and trying to link with it (this time the link process succeeded, but on Leopard the app still tries to look up the non-existent libcrypto.0.9.8.dylib file and thus won't launch).
    3. Copying libcrypto.0.9.7.dylib from a Mac OS X 10.5.8 installation and linking with it (the link was successful, but the app still looks for libcrypto.0.9.8.dylib).

    Is there a way to link to this library and still use the 10.6 SDK? Thanks.

    Read the article

  • Organization of linking to external libraries in C++

    - by Nicholas Palko
    In a cross-platform (Windows, FreeBSD) C++ project I'm working on, I am making use of two external libraries, Protocol Buffers and ZeroMQ. In both projects I am tracking the latest development branch, so these libraries are recompiled/replaced often. For a development scenario, where is the best place to keep libprotobuf.{a,lib} and zeromq.{so,dll}? Should I have my build script copy them from their respective project directories into my local project's directory (say MyProjectRoot/lib or MyProjectRoot/bin) before I build my project? This seems preferable to tossing things into /usr/local/lib, as I wouldn't want to replace a system-wide stable version with the latest experimental one. CMake warns me whenever I specify a relative path for linking, so I suspect copying is a better solution than relative linking? Is this the best approach? Thanks for your help!

    Read the article

  • long waiting time in linking

    - by ccanan
    Hi, here is the situation: I am using Visual Studio 2005. The solution contains lots of projects, 34 in all, and the startup project depends on the others. In the linking part, it waits a long time before the real linking starts. I am pretty sure it's because of the many project dependencies: when I use a solution with only 10 of the 34 projects (keeping the other projects as headers & libs), linking starts instantly. So, does anyone have any idea how I can reduce the waiting time? thx.

    Read the article

  • Visual C++ 2008: Finding the cause of slow link times

    - by ckarras
    I have a legacy C++ project that takes an annoyingly long time to build (several minutes, even for small incremental changes), and I found most of the time was spent linking. The project is already using precompiled headers and incremental compilation. I have enabled the "/time" command line parameter in the hope I would get more details about what is slowing the linker, and got the following output:

        1>Linking...
        1>  MD Merge: Total time = 59.938s
        1>  Generate Transitions: Total time = 0.500s
        1>  MD Finalize: Total time = 7.328s
        1>Pass 1: Interval #1, time = 71.718s
        1>Pass 2: Interval #2, time = 8.969s
        1>Final: Total time = 80.687s
        1>Final: Total time = 80.953s

    Is there a way to get more details about each of these steps? For example, I would like to find out whether they are spending most of their time on a specific .lib or .obj file. Also, is there any documentation that explains what each of these steps does?

    Read the article

  • Is it possible to wrap calls to statically linked 3rd party library?

    - by robusta
    Hi, I would like to trace calls to some 3rd party library which are made from another 3rd party library. Example: I want to trace calls to library A. My application statically links library B, which in turn is statically linked to library A. In the case of dynamic linking I could write a library A2 with wrappers for the functions of library A which I want to trace, and use LD_PRELOAD=A2.so. Then my wrappers would be called instead, and I would see the trace. In my case I cannot use dynamic linking. Is it possible to achieve the same using static linking? Thanks, Robusta

    Read the article

  • Generic method to create deep copy of all elements in a collection

    - by bwarner
    I have various ObservableCollections of different object types. I'd like to write a single method that will take a collection of any of these object types and return a new collection where each element is a deep copy of the elements in the given collection. Here is an example for a specific class:

        private static ObservableCollection<PropertyValueRow> DeepCopy(ObservableCollection<PropertyValueRow> list)
        {
            ObservableCollection<PropertyValueRow> newList = new ObservableCollection<PropertyValueRow>();
            foreach (PropertyValueRow rec in list)
            {
                newList.Add((PropertyValueRow)rec.Clone());
            }
            return newList;
        }

    How can I make this method generic for any class which implements ICloneable?

    Read the article

  • How do I deep copy a DateTime object?

    - by Billy ONeal
        $date1 = $date2 = new DateTime();
        $date2->add(new DateInterval('P3Y'));

    Now $date1 and $date2 contain the same date -- three years from now. I'd like to create two separate datetimes, one which is parsed from a string and one with three years added to it. Currently I've hacked it up like this:

        $date2 = new DateTime($date1->format(DateTime::ISO8601));

    but that seems like a horrendous hack. Is there a "correct" way to deep copy a DateTime object?

    Read the article

  • Nasty deep nested loop in Rails

    - by CalebHC
    I have this nested loop that goes 4 levels deep to find all the image widgets and calculate their sizes. This seems really inefficient and nasty! I have thought of putting the organization_id in the widget model so I could just call something like organization.widgets.(named_scope), but I feel like that's a bad shortcut. Is there a better way? Thanks

        class Organization < ActiveRecord::Base
          ...
          def get_image_widget_total
            total_size = 0
            self.trips.each do |t|
              t.phases.each do |phase|
                phase.pages.each do |page|
                  page.widgets.each do |widget|
                    if widget.widget_type == Widget::IMAGE
                      total_size += widget.image_file_size
                    end
                  end
                end
              end
            end
            return total_size
          end
          ...
        end

    Read the article

  • saving iPhone program state with a deep UINavigationController

    - by jr
    Can someone suggest a good way to save the program state (UINavigationController stack, etc.) of an iPhone application? My application obtains a bunch of information from the network, and I want to return the person back to the last screen they were on, even if it was 3 or 4 screens deep. I assume that I will need to reload the data from the network along the way as I recreate the UINavigation controllers; I don't necessarily have a problem with this. I'm thinking about maybe having my UINavigationController objects implement some type of protocol which allows me to save/set their state? I'm looking to hear from others who may have needed to implement a similar scenario and how they accomplished it. My application has a UITabbarController at the root and UINavigationController items for each tab bar item. thanks!

    Read the article

  • Deep Null checking, is there a better way?

    - by Mattias Konradsson
    We've all been there: we have some deep property like cake.frosting.berries.loader that we need to check for null so there's no exception. The way to do this is to use a short-circuiting if statement:

        if (cake != null && cake.frosting != null && cake.frosting.berries != null) ...

    This strikes me however as not very elegant; there should perhaps be an easier way to check the entire chain and see if it comes up against a null variable/property. So: is it possible using some extension method, would it be a language feature, or is it just a bad idea?
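
    The question is asked about C#, but as an illustration of what such chain-checking looks like where it is a library feature, here is the equivalent in Java using Optional, which short-circuits at the first null link; Cake, Frosting, and Berries are hypothetical stand-ins for the property chain above:

        import java.util.Optional;

        class Berries { }
        class Frosting { Berries berries; Berries getBerries() { return berries; } }
        class Cake { Frosting frosting; Frosting getFrosting() { return frosting; } }

        public class NullChain {
            public static void main(String[] args) {
                Cake cake = new Cake(); // frosting is null
                // Each map() step is skipped as soon as a link is null,
                // so the chain never throws a NullPointerException.
                Berries berries = Optional.ofNullable(cake)
                        .map(Cake::getFrosting)
                        .map(Frosting::getBerries)
                        .orElse(null);
                System.out.println(berries); // prints "null", no exception
            }
        }

    For what it's worth, C# itself later gained a null-conditional operator (cake?.frosting?.berries) that answers exactly this question at the language level.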

    Read the article

  • deep injection - spring

    - by Bob
    What is the best way (or what are the options) for accessing Spring components at layers deep within the application that aren't managed by Spring? For example, I have a low-level utility POJO class into which I need to autowire/inject a Spring component. I'll call it LowLevelHelper. There are multiple classes that use LowLevelHelper - most are layers away from anything that is hooked up with Spring. One option would be to make all the layers into Spring components, but that seems like I'm hacking my design to force Spring to help me. I have some complex things going on that won't be nearly as clean if I have to @Autowire all the pieces and never new anything. Another option might be to manually inject the component in the low-level class, but I'm not really sure if this is possible or the right solution.
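
    One common workaround (a sketch, not the only answer): let a single Spring-managed bean capture the ApplicationContext and expose a static lookup, so low-level POJOs that Spring does not manage can still fetch components on demand. SpringContextHolder is an invented name for illustration:

        import org.springframework.beans.BeansException;
        import org.springframework.context.ApplicationContext;
        import org.springframework.context.ApplicationContextAware;
        import org.springframework.stereotype.Component;

        // Spring instantiates this bean and hands it the context at startup.
        @Component
        public class SpringContextHolder implements ApplicationContextAware {
            private static ApplicationContext context;

            @Override
            public void setApplicationContext(ApplicationContext applicationContext)
                    throws BeansException {
                context = applicationContext;
            }

            // Callable from anywhere, including classes Spring does not manage.
            public static <T> T getBean(Class<T> type) {
                return context.getBean(type);
            }
        }

    LowLevelHelper could then call SpringContextHolder.getBean(SomeComponent.class) instead of being wired itself; the trade-off is that this is service-locator style lookup rather than true injection, so it is best confined to the few classes that genuinely cannot be Spring-managed.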

    Read the article

  • How to do manual DI with deep object graphs and many dependencies properly

    - by Fabian
    I believe this question has been asked in one way or another before, but I'm not getting it yet. We are doing a GWT project, and my project leader disallowed using GIN/Guice as a DI framework (new programmers are not going to understand it, he argued), so I am trying to do the DI manually. Now I have a problem with deep object graphs. The object hierarchy from the UI looks like this:

        AppPresenter -> DashboardPresenter -> GadgetPresenter -> GadgetConfigPresenter

    The GadgetConfigPresenter, way down the object hierarchy tree, has a few dependencies like CustomerRepository, ProjectRepository, MandatorRepository, etc. So the GadgetPresenter which creates the GadgetConfigPresenter also has these dependencies, and so on, up to the entry point of the app which creates the AppPresenter. Is this the way manual DI is supposed to work? Doesn't this mean that I create all dependencies at boot time, even if I don't need them? Would a DI framework like GIN/Guice help me here?
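
    For comparison, here is a minimal sketch of the hand-rolled alternative (all class bodies simplified to stubs): wire the graph once in a composition root, and hand intermediate presenters a small factory instead of every repository, so the repositories neither leak into every constructor nor get constructed before they are needed:

        class CustomerRepository { }
        class ProjectRepository { }

        class GadgetConfigPresenter {
            GadgetConfigPresenter(CustomerRepository customers, ProjectRepository projects) { }
        }

        // The factory is the only thing GadgetPresenter needs to know about;
        // it hides the config presenter's repositories from the layers above.
        class GadgetConfigPresenterFactory {
            private final CustomerRepository customers;
            private final ProjectRepository projects;

            GadgetConfigPresenterFactory(CustomerRepository customers, ProjectRepository projects) {
                this.customers = customers;
                this.projects = projects;
            }

            GadgetConfigPresenter create() {
                // Created lazily, only when a gadget actually opens its config.
                return new GadgetConfigPresenter(customers, projects);
            }
        }

        class GadgetPresenter {
            private final GadgetConfigPresenterFactory configFactory;

            GadgetPresenter(GadgetConfigPresenterFactory configFactory) {
                this.configFactory = configFactory;
            }

            GadgetConfigPresenter openConfig() {
                return configFactory.create();
            }
        }

        public class CompositionRoot {
            public static void main(String[] args) {
                // The entry point is the one place that knows how to build the graph.
                GadgetConfigPresenterFactory factory = new GadgetConfigPresenterFactory(
                        new CustomerRepository(), new ProjectRepository());
                GadgetPresenter gadgets = new GadgetPresenter(factory);
                gadgets.openConfig();
            }
        }

    This is essentially the boilerplate GIN/Guice would generate for you; the factory also answers the boot-time concern, since nothing below it is constructed until create() runs.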

    Read the article

  • Getting "stack level too deep" error when deploying with Capistrano, Rails 3.1 ruby 1.9.2

    - by Victor S
    Here is the log for the cap deploy script output around where the error occurs. Any suggestions why this might be happening? Thanks!

        [yup.la] executing command
        [yup.la] sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'
        ** [out :: yup.la] rake aborted!
        ** [out :: yup.la]
        ** [out :: yup.la] stack level too deep
        ** [out :: yup.la] (in /srv/www/portrait/releases/20120406051647/app/assets/stylesheets/mobile.css.scss)
        ** [out :: yup.la]
        ** [out :: yup.la] Tasks: TOP => assets:precompile:primary
        ** [out :: yup.la] (See full trace by running task with --trace)
        ** [out :: yup.la] command finished in 30868ms
        *** [deploy:update_code] rolling back
        * executing "rm -rf /srv/www/portrait/releases/20120406051647; true"
        servers: ["yup.la"]
        [yup.la] executing command
        [yup.la] sh -c 'rm -rf /srv/www/portrait/releases/20120406051647; true'
        command finished in 288ms
        failed: "sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'" on yup.la

    Read the article

  • I get "stack level too deep" error when using a named scope

    - by Brian Roisentul
    I'm using Ruby on Rails 2.3.8, and when I use the syntax shown below I get the "stack level too deep" error message. The model is called Announcement and the line of the error looks like this:

        Tag.find(category_id).announcements.published

    where published is:

        named_scope :published, :conditions => "announcements.state = 'published'"

    I use this named scope in many other places and it works fine. What am I doing wrong? (The relationship between the Tag and Announcement models is ok, because if I remove the .published method from that line it works just fine.)

    Read the article

  • Backbone inheritance - deep copying

    - by Ed .
    I've seen this question regarding inheritance in Backbone: Backbone.js view inheritance. Useful, but it doesn't answer my question. The problem I'm experiencing is this: say I have a class Panel (a model in this example);

        var Panel = Backbone.Model.extend({
          defaults : {
            name : 'my-panel'
          }
        });

    And then an AdvancedPanel;

        var AdvancedPanel = Panel.extend({
          defaults : {
            label : 'Click to edit'
          }
        });

    The following doesn't work:

        var advancedPanel = new AdvancedPanel();
        alert(advancedPanel.get('name')); // Undefined :(

    JSFiddle here: http://jsfiddle.net/hWmnb/

    I guess I can see that I can achieve this myself through some custom extend function that creates a deep copy of the prototype, but this seems like a common thing that people might want from Backbone inheritance; is there a standard way of doing it?

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in Reducing the dimensionality of data with neural networks of autoencoding the olivetti face dataset with an adapted version of the MNIST digits matlab code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs are entering the fine-tuning stage with a large amount of error and consequently fail to improve much at the fine-tuning stage. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using a RBM with a smaller learning rate (as described in the paper) and with

        negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);

    I'm fairly confident I am following the instructions found in the supporting material but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible unit RBMs below, and for the whole deep training. The rest of the code can be found here.

    rbmvislinear.m:

        epsilonw  = 0.001; % Learning rate for weights
        epsilonvb = 0.001; % Learning rate for biases of visible units
        epsilonhb = 0.001; % Learning rate for biases of hidden units
        weightcost = 0.0002;
        initialmomentum = 0.5;
        finalmomentum = 0.9;

        [numcases numdims numbatches] = size(batchdata);

        if restart == 1,
          restart = 0;
          epoch = 1;
          % Initializing symmetric weights and biases.
          vishid = 0.1*randn(numdims, numhid);
          hidbiases = zeros(1,numhid);
          visbiases = zeros(1,numdims);
          poshidprobs = zeros(numcases,numhid);
          neghidprobs = zeros(numcases,numhid);
          posprods = zeros(numdims,numhid);
          negprods = zeros(numdims,numhid);
          vishidinc = zeros(numdims,numhid);
          hidbiasinc = zeros(1,numhid);
          visbiasinc = zeros(1,numdims);
          sigmainc = zeros(1,numhid);
          batchposhidprobs = zeros(numcases,numhid,numbatches);
        end

        for epoch = epoch:maxepoch,
          fprintf(1,'epoch %d\r',epoch);
          errsum = 0;
          for batch = 1:numbatches,
            if (mod(batch,100)==0)
              fprintf(1,' %d ',batch);
            end

            %%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            data = batchdata(:,:,batch);
            poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
            batchposhidprobs(:,:,batch) = poshidprobs;
            posprods = data' * poshidprobs;
            poshidact = sum(poshidprobs);
            posvisact = sum(data);
            %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

            poshidstates = poshidprobs > rand(numcases,numhid);

            %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
            neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
            negprods = negdata'*neghidprobs;
            neghidact = sum(neghidprobs);
            negvisact = sum(negdata);
            %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

            err = sum(sum( (data-negdata).^2 ));
            errsum = err + errsum;

            if epoch > 5,
              momentum = finalmomentum;
            else
              momentum = initialmomentum;
            end;

            %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            vishidinc = momentum*vishidinc + ...
                epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
            visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
            hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
            vishid = vishid + vishidinc;
            visbiases = visbiases + visbiasinc;
            hidbiases = hidbiases + hidbiasinc;
            %%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
          end
          fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
        end

    dofacedeepauto.m:

        clear all
        close all

        maxepoch = 200; %In the Science paper we use maxepoch=50, but it works just fine.
        numhid = 2000; numpen = 1000; numpen2 = 500; numopen = 30;

        fprintf(1,'Pretraining a deep autoencoder. \n');
        fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

        load fdata %makeFaceData;
        [numcases numdims numbatches] = size(batchdata);

        fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
        restart = 1;
        rbmvislinear;
        hidrecbiases = hidbiases;
        save mnistvh vishid hidrecbiases visbiases;

        maxepoch = 50;
        fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
        batchdata = batchposhidprobs;
        numhid = numpen;
        restart = 1;
        rbm;
        hidpen = vishid; penrecbiases = hidbiases; hidgenbiases = visbiases;
        save mnisthp hidpen penrecbiases hidgenbiases;

        fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
        batchdata = batchposhidprobs;
        numhid = numpen2;
        restart = 1;
        rbm;
        hidpen2 = vishid; penrecbiases2 = hidbiases; hidgenbiases2 = visbiases;
        save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

        fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
        batchdata = batchposhidprobs;
        numhid = numopen;
        restart = 1;
        rbmhidlinear;
        hidtop = vishid; toprecbiases = hidbiases; topgenbiases = visbiases;
        save mnistpo hidtop toprecbiases topgenbiases;

        backpropface;

    Thanks for your time

    Read the article

  • deep linking in Excel sheets exported to html

    - by pomarc
    hello everybody, I am working on a project where I must export a lot of Excel files to html. This is pretty straightforward using automation and saving as html. The problem is that many of these sheets have links to worksheets of other files, and I must find a way to write a link to a single inner worksheet.

    When you export a multisheet Excel file to html, Excel creates a main htm file, a folder named filename_file, and inside this folder it writes several files: a css file, an xml list of files, a file that creates the tab bar, and several html files named sheetxxx.htm, each one representing a worksheet. When you open the main file, you can click the menu bar at the bottom to select the appropriate sheet. This is in fact a link which replaces a frame's content with the sheetxxx.htm file; when that file is loaded, a javascript function that selects the right tab gets called.

    The exported files will be published on a web site. I will have to post-process each htm file and replace every link to the other xls files with a link to the matching htm file, finding a way to open the right worksheet. I think I could add a parameter to the processed htm file's link url, such as myfile.htm?sh=sheet002.htm if I want to link to the second worksheet of myfile.htm (ex myfile.xls). After I've exported them, I could inject a simple javascript into each of the main files which, when they are loaded, retrieves the sh parameter with jQuery (this is easy) and uses it to somehow replace the frSheet frame contents (where the sheets get loaded), opening the right inner sheet instead of the default one (this is what I call deep linking), mimicking what happens when a user clicks on a tab. This last step is missing... :)

    I am considering different options, such as replacing the source of the $("frSheet") frame after document.ready. I'd like to hear your advice on what could be the best way to realize this. Any help is greatly appreciated, many thanks.

    Read the article

  • Deep copy objects from different namespaces

    - by Wasim
    Hi all, I have the following situation: I have a class User with the following properties:

        public class User
        {
            string userName;
            List<Contact> contacts;
            List<BookMark> bookMarks;
            ...
        }

    I have the same class in a different namespace, with some different properties, and the same situation applies to its member classes (Contact and BookMark). I need to make a deep copy of the properties the two classes share. Actually, I arrived at this situation by having an Entity Framework edmx file: I created the first database (SQL Server 2008) from this model, then copied the same edmx file to another project and created the second database with SQL CE. Now I get the first data model's objects over a WCF service and need to persist them in the local database in my application. The objects are the same, but there are some changes because of the modeling differences between the two databases. Do you have any workaround for this issue? Thanks in advance...
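
    The classes above are C#-flavored, but as a language-neutral sketch, here is a reflection-based copier in Java that moves same-named, assignment-compatible fields between two look-alike classes. A real solution would still need to map the nested Contact/BookMark lists element by element for a true deep copy, and NamespaceMapper is an invented name:

        import java.lang.reflect.Field;

        final class NamespaceMapper {
            // Copies every source field whose name and type match a field on the target.
            static <S, T> T copyMatchingFields(S source, T target) throws IllegalAccessException {
                for (Field sourceField : source.getClass().getDeclaredFields()) {
                    sourceField.setAccessible(true);
                    try {
                        Field targetField = target.getClass().getDeclaredField(sourceField.getName());
                        if (targetField.getType().isAssignableFrom(sourceField.getType())) {
                            targetField.setAccessible(true);
                            targetField.set(target, sourceField.get(source));
                        }
                    } catch (NoSuchFieldException ignored) {
                        // The field exists only on the source class; skip it.
                    }
                }
                return target;
            }
        }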

    Read the article

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect ratio and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then to 512, then to 256, etc., down to a 1-pixel image. There are two, I suspect naive, approaches:

    1. For each image required, scale the original full-size image to the required size. However, it seems excessive to scale the full image down to the very small sizes.
    2. Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller image. However, I suspect this would generate images in the 256-64 range with poorer fidelity than option 1.

    Note that unlike the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, hence it needs to complete in a reasonable timeframe (tops 30 seconds). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here; any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
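
    The question targets Silverlight, but the halving strategy itself is language-agnostic. Here is a sketch of option 2 in Java's standard imaging API (assuming a square, power-of-two input): each level is produced from the previous one with a single 2:1 bilinear reduction, which keeps fidelity because the filter never has to average more than a small neighbourhood per output pixel:

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import java.util.ArrayList;
        import java.util.List;

        public class Pyramid {
            // Builds the full pyramid by repeatedly halving the previous level.
            static List<BufferedImage> buildPyramid(BufferedImage full) {
                List<BufferedImage> levels = new ArrayList<BufferedImage>();
                levels.add(full);
                BufferedImage current = full;
                while (current.getWidth() > 1) {
                    int w = Math.max(1, current.getWidth() / 2);
                    int h = Math.max(1, current.getHeight() / 2);
                    BufferedImage half = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                    Graphics2D g = half.createGraphics();
                    // Bilinear filtering is adequate for an exact 2:1 reduction.
                    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                    g.drawImage(current, 0, 0, w, h, null);
                    g.dispose();
                    levels.add(half);
                    current = half;
                }
                return levels;
            }

            public static void main(String[] args) {
                BufferedImage full = new BufferedImage(2048, 2048, BufferedImage.TYPE_INT_RGB);
                System.out.println(buildPyramid(full).size() + " levels"); // 12 levels, 2048 down to 1
            }
        }

    Repeated 2:1 reduction is how mipmap chains are normally built, so the fidelity worry about option 2 is largely unfounded as long as each step halves exactly.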

    Read the article

  • 'Stack level too deep' error in engine-like plugin with globalize

    - by nutsmuggler
    Hello folks. I have built an engine-like plugin thanks to the new features of Rails 2.3. It's a 'Product' module for a CMS, extrapolated from a previously existing (and working) model/controller. The plugin relies on easy_fckeditor and on globalize (the description and title fields are localised), and I suspect that globalize could be the culprit here... Everything works fine except for the update action, where I get the following error message (posting just the first lines; the whole trace is about attribute_methods):

        stack level too deep
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:64:in `generated_methods?'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:241:in `method_missing'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:249:in `method_missing'

    For reference, the full error stack is here: http://pastie.org/596546

    I've tried to debug by eliminating the input fields one by one, but I keep getting the error; fckeditor doesn't seem to be the culprit (the error occurs even without fckeditor). This is the action:

        def update
          params[:product][:term_ids] ||= []
          @product = Product.find(params[:id])
          respond_to do |format|
            if @product.update_attributes(params[:product])
              flash[:notice] = t(:Product_was_successfully_updated)
              format.html { redirect_to products_path }
              format.xml  { head :ok }
            else
              format.html { render :action => "edit" }
              format.xml  { render :xml => @product.errors, :status => :unprocessable_entity }
            end
          end
        end

    As you can see, it's quite straightforward. Of course I am not hoping for someone to solve this question straightaway; I'd just like a heads-up, a suggestion about where to look to solve this issue. Thanks in advance, Davide

    Read the article
