Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy etc., but the drawback is that each connection maxes out at about 6Mb/s. Each individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine for complex web pages and makes use of both external connections. However, single-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking: surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
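
    For what it's worth, the reassembly idea itself can be sketched in a few lines of Python. This is only an illustration of the byte-range splitting, not the proxy; the URL and the number of parts are placeholders, and it assumes the origin server honours Range requests (i.e. answers with 206 Partial Content).

        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://example.com/big.iso"   # placeholder
        PARTS = 4                            # placeholder

        def fetch_range(start, end):
            # Ask the server for just bytes start..end of the resource.
            req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                return resp.read()

        # HEAD request to learn the total size of the resource.
        head = urllib.request.Request(URL, method="HEAD")
        size = int(urllib.request.urlopen(head).headers["Content-Length"])

        chunk = size // PARTS
        ranges = [(i * chunk, size - 1 if i == PARTS - 1 else (i + 1) * chunk - 1)
                  for i in range(PARTS)]

        # Fetch the ranges in parallel and stitch them back together in order.
        with ThreadPoolExecutor(max_workers=PARTS) as pool:
            body = b"".join(pool.map(lambda r: fetch_range(*r), ranges))

        with open("big.iso", "wb") as f:
            f.write(body)

    A real proxy would stream each range to the client as it completes rather than buffering the whole file, and fall back to a single request when the server ignores Range.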

    Read the article

  • Rails Named Scope and overlapping conditions

    - by Tumtu
    Hi everyone, I have a question about Rails SQL generation:

        class Organization < ActiveRecord::Base
          has_many :people
          named_scope :active, :conditions => { :active => 'Yes' }
        end

        class Person < ActiveRecord::Base
          belongs_to :organization
        end

    The SQL Rails generates for all active people in the first organization (Organization.first.people.active.all) is:

        Organization Load (0.0ms)  SELECT TOP 1 * FROM [organizations]
        Person Load (0.0ms)  SELECT * FROM [people] WHERE ((([people].[active] = 'Yes') AND ([people].organization_id = 1)) AND ([people].organization_id = 1))

    Why does Rails generate the "[people].organization_id = 1" condition twice? Does someone know how to make it DRY? e.g.

        SELECT * FROM [people] WHERE (([people].[active] = 'Yes') AND ([people].organization_id = 1))

    Read the article

  • Recommendations for a programmable drivers license scanner?

    - by Slapout
    Our motor pool wants to scan drivers’ licenses and have the data imported into our custom system. We're looking for something that will allow us to programmatically get the data from the scanner (including the picture) and let us insert it into our application. I was wondering if anyone has had experience with this type of system and could recommend one or tell us which ones to avoid. Our application is written in PowerBuilder and uses a DB2 database.

    Read the article

  • jQuery i++ and i-- problems ... what on earth???

    - by michael
    Could someone please tell me what I am doing wrong? I'm not a newbie at programming but I feel like it tonight! Every time I increment the counter variable it throws a fit! When I add one to it, it behaves fine, but if I try to add one more to it, it wants to add 2 more. And then if I try to decrement it, it wants to subtract from the original number it was assigned. I've tried i++;, i = i+1; and i = i++;. Nothing seems to work. It's got to be a stupid mistake. Press the buttons to increment and decrement: http://michaelreynolds.net/iphone/ Here's the code:

        var dayNum = 30;
        //----------------------------------------------------------------------
        $.jQTouch({
          icon: 'dailyqoteicon.png',
          statusBar: false,
          initializeTouch: 'a.touch'
        });
        //----------------------------------------------------------------------
        $(document).ready(function(){
          //$(function(){});
          $(function(){
            $('a.touch').swipe( function(event, info){
              //alert("jQTouch swipe event");
              //alert(info.direction);
            });
          });
          $(function updateVerse(){
            //alert("updateVerse called");
            $.ajax({
              type: "GET",
              dataType: 'JSON',
              data: 'day='+ dayNum,
              url: 'forward.php',
              success: function(data){
                var obj = $.parseJSON(data);
                $("h2.quote").html("");
                $("h3.reference").html("");
                $("h2.quote").append(obj.quote);
                $("h3.reference").append(obj.reference, " ", obj.version);
                //$("span.version").append(obj.version);
                //-----------------------------------
                // JSON string {"id":"1","quote":"For to me, to live is Christ, and to die is gain","reference":"Philippians 1:21","version":"NKJV"}
              },
              error: function(request, error){
                alert("problem retrieving json data string");
              }
            });
            function addDayNum(){
              dayNum = dayNum + 1;
              //dayNum = dayNum++;
            }
            function subDayNum(){
              dayNum = dayNum - 1;
              //dayNum = dayNum--;
            }
            $("div#header a.next").tap( function(){
              addDayNum();
              //dayNum++;// doesn't work at all
              //dayNum = dayNum + 1;//doesn't work at all
              updateVerse();
              //alert(dayNum);
              //alert("next clicked");
            });
            $("div#header a.prev").live('click', function(){
              subDayNum();
              //dayNum--;//doesn't work at all
              //dayNum = dayNum - 1;// doesn't work at all
              updateVerse();
              //alert(dayNum);
              //alert("previous clicked");
            });
          });
        });

    Read the article

  • SQL - get latest records from table where field is unique

    - by 89stevenharris
    I have a table of data as follows:

        id  status  conversation_id  message_id  date_created
        1   1       1                72          2012-01-01 00:00:00
        2   2       1                87          2012-03-03 00:00:00
        3   2       2                95          2012-05-05 00:00:00

    I want to get all the rows from the table in date_created DESC order, but only one row per conversation_id. So in the case of the example data above, I would want to get the rows with id 2 and 3. Any advice is much appreciated.
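
    One common way to express this kind of "latest row per group" query is to join the table against a per-conversation MAX(date_created) subquery. A minimal sketch using sqlite3 purely for illustration; the table name "messages" is a placeholder (the question doesn't give one), the column names follow the question, and ties on date_created within a conversation would need an extra tie-breaker:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE messages (id INTEGER, status INTEGER, conversation_id INTEGER,
                                   message_id INTEGER, date_created TEXT);
            INSERT INTO messages VALUES
                (1, 1, 1, 72, '2012-01-01 00:00:00'),
                (2, 2, 1, 87, '2012-03-03 00:00:00'),
                (3, 2, 2, 95, '2012-05-05 00:00:00');
        """)

        # Keep only the newest row per conversation_id, newest first overall.
        rows = conn.execute("""
            SELECT m.*
            FROM messages m
            JOIN (SELECT conversation_id, MAX(date_created) AS latest
                  FROM messages
                  GROUP BY conversation_id) t
              ON t.conversation_id = m.conversation_id AND t.latest = m.date_created
            ORDER BY m.date_created DESC
        """).fetchall()

        print(rows)   # the rows with id 3 and 2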

    Read the article

  • How to change last letter of filename to lowercase if it is a letter?

    - by Robert Buckley
    I have been given data which cannot be interpreted by my software unless the filename has a lowercase letter at the end. The data was delivered with an uppercase letter at the end. Somehow I need to recursively loop through all folders, find whether each filename ends with a letter, and change that letter to lowercase. I think Python could do this, but I don't know how. Any help would be great! yours, Rob
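
    A minimal sketch of one way to do this with os.walk; the root path is a placeholder, and it assumes the trailing character of the whole filename (extension included) is the one that needs lowering:

        import os

        root = "/path/to/data"   # placeholder: top-level folder of the delivered files

        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                last = name[-1]
                # Only rename when the final character is an uppercase letter.
                if last.isalpha() and last.isupper():
                    new_name = name[:-1] + last.lower()
                    os.rename(os.path.join(dirpath, name),
                              os.path.join(dirpath, new_name))

    If the files have extensions and it is the last letter of the base name that matters, split with os.path.splitext first and lowercase the end of the stem instead.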

    Read the article

  • Bizarre WHERE col = NULL behavior

    - by Kenneth
    This is a problem one of our developers brought to me. He stumbled across an old stored procedure which used 'WHERE col = NULL' several times. When the stored procedure is executed it returns data. If the query inside the stored procedure is executed manually it will not return data unless the 'WHERE col = NULL' references are changed to 'WHERE col IS NULL'. Can anyone explain this behavior?
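
    Under standard three-valued logic, col = NULL evaluates to UNKNOWN rather than TRUE, so such a predicate filters out every row; that standard behaviour is easy to reproduce outside the stored procedure. A tiny sqlite3 sketch, purely for illustration (if the database in question is SQL Server, a session setting such as ANSI_NULLS captured when the procedure was created is the usual reason = NULL behaves like IS NULL inside the procedure but not in an ad-hoc query):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE t (col TEXT);
            INSERT INTO t VALUES (NULL), ('x');
        """)

        # '= NULL' yields UNKNOWN, so no row ever qualifies.
        print(conn.execute("SELECT COUNT(*) FROM t WHERE col = NULL").fetchone())   # (0,)

        # 'IS NULL' is the null test the SQL standard actually defines.
        print(conn.execute("SELECT COUNT(*) FROM t WHERE col IS NULL").fetchone())  # (1,)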

    Read the article

  • PHP Session Class and $_SESSION Array

    - by Gianluca Bargelli
    Hello, I've implemented this custom PHP Session class for storing sessions in a MySQL database:

        class Session {
            private $_session;
            public $maxTime;
            private $database;

            public function __construct(mysqli $database) {
                $this->database = $database;
                $this->maxTime['access'] = time();
                $this->maxTime['gc'] = get_cfg_var('session.gc_maxlifetime');
                session_set_save_handler(array($this,'_open'),
                                         array($this,'_close'),
                                         array($this,'_read'),
                                         array($this,'_write'),
                                         array($this,'_destroy'),
                                         array($this,'_clean'));
                register_shutdown_function('session_write_close');
                session_start(); // SESSION START
            }

            public function _open() {
                return true;
            }

            public function _close() {
                $this->_clean($this->maxTime['gc']);
            }

            public function _read($id) {
                $getData = $this->database->prepare("SELECT data FROM Sessions AS Session WHERE Session.id = ?");
                $getData->bind_param('s', $id);
                $getData->execute();
                $allData = $getData->fetch();
                $totalData = count($allData);
                $hasData = (bool) $totalData >= 1;
                return $hasData ? $allData['data'] : '';
            }

            public function _write($id, $data) {
                $getData = $this->database->prepare("REPLACE INTO Sessions VALUES (?, ?, ?)");
                $getData->bind_param('sss', $id, $this->maxTime['access'], $data);
                return $getData->execute();
            }

            public function _destroy($id) {
                $getData = $this->database->prepare("DELETE FROM Sessions WHERE id = ?");
                $getData->bind_param('S', $id);
                return $getData->execute();
            }

            public function _clean($max) {
                $old = ($this->maxTime['access'] - $max);
                $getData = $this->database->prepare("DELETE FROM Sessions WHERE access < ?");
                $getData->bind_param('s', $old);
                return $getData->execute();
            }
        }

    It works well, but I don't really know how to properly access the $_SESSION array. For example:

        $db = new DBClass(); // This is a custom database class
        $session = new Session($db->getConnection());

        if (isset($_SESSION['user'])) {
            echo($_SESSION['user']); // THIS IS NEVER EXECUTED!
        } else {
            $_SESSION['user'] = "test";
            Echo("Session created!");
        }

    At every page refresh it seems that $_SESSION['user'] is somehow "reset". What can I do to prevent this behaviour?

    Read the article

  • Little more help with writing a o buffer with libjpeg

    - by Richard Knop
    So I have managed to find another question discussing how to use the libjpeg to compress an image to jpeg. I have found this code which is supposed to work: Compressing IplImage to JPEG using libjpeg in OpenCV Here's the code (it compiles ok): /* This a custom destination manager for jpeglib that enables the use of memory to memory compression. See IJG documentation for details. */ typedef struct { struct jpeg_destination_mgr pub; /* base class */ JOCTET* buffer; /* buffer start address */ int bufsize; /* size of buffer */ size_t datasize; /* final size of compressed data */ int* outsize; /* user pointer to datasize */ int errcount; /* counts up write errors due to buffer overruns */ } memory_destination_mgr; typedef memory_destination_mgr* mem_dest_ptr; /* ------------------------------------------------------------- */ /* MEMORY DESTINATION INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* This function is called by the library before any data gets written */ METHODDEF(void) init_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; /* set destination buffer */ dest->pub.free_in_buffer = dest->bufsize; /* input buffer size */ dest->datasize = 0; /* reset output size */ dest->errcount = 0; /* reset error count */ } /* This function is called by the library if the buffer fills up I just reset destination pointer and buffer size here. Note that this behavior, while preventing seg faults will lead to invalid output streams as data is over- written. */ METHODDEF(boolean) empty_output_buffer (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; dest->pub.free_in_buffer = dest->bufsize; ++dest->errcount; /* need to increase error count */ return TRUE; } /* Usually the library wants to flush output here. I will calculate output buffer size here. Note that results become incorrect, once empty_output_buffer was called. This situation is notified by errcount. */ METHODDEF(void) term_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->datasize = dest->bufsize - dest->pub.free_in_buffer; if (dest->outsize) *dest->outsize += (int)dest->datasize; } /* Override the default destination manager initialization provided by jpeglib. Since we want to use memory-to-memory compression, we need to use our own destination manager. */ GLOBAL(void) jpeg_memory_dest (j_compress_ptr cinfo, JOCTET* buffer, int bufsize, int* outsize) { mem_dest_ptr dest; /* first call for this instance - need to setup */ if (cinfo->dest == 0) { cinfo->dest = (struct jpeg_destination_mgr *) (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT, sizeof (memory_destination_mgr)); } dest = (mem_dest_ptr) cinfo->dest; dest->bufsize = bufsize; dest->buffer = buffer; dest->outsize = outsize; /* set method callbacks */ dest->pub.init_destination = init_destination; dest->pub.empty_output_buffer = empty_output_buffer; dest->pub.term_destination = term_destination; } /* ------------------------------------------------------------- */ /* MEMORY SOURCE INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* Called before data is read */ METHODDEF(void) init_source (j_decompress_ptr dinfo) { /* nothing to do here, really. I mean. I'm not lazy or something, but... we're actually through here. */ } /* Called if the decoder wants some bytes that we cannot provide... 
*/ METHODDEF(boolean) fill_input_buffer (j_decompress_ptr dinfo) { /* we can't do anything about this. This might happen if the provided buffer is either invalid with regards to its content or just a to small bufsize has been given. */ /* fail. */ return FALSE; } /* From IJG docs: "it's not clear that being smart is worth much trouble" So I save myself some trouble by ignoring this bit. */ METHODDEF(void) skip_input_data (j_decompress_ptr dinfo, INT32 num_bytes) { /* There might be more data to skip than available in buffer. This clearly is an error, so screw this mess. */ if ((size_t)num_bytes > dinfo->src->bytes_in_buffer) { dinfo->src->next_input_byte = 0; /* no buffer byte */ dinfo->src->bytes_in_buffer = 0; /* no input left */ } else { dinfo->src->next_input_byte += num_bytes; dinfo->src->bytes_in_buffer -= num_bytes; } } /* Finished with decompression */ METHODDEF(void) term_source (j_decompress_ptr dinfo) { /* Again. Absolute laziness. Nothing to do here. Boring. */ } GLOBAL(void) jpeg_memory_src (j_decompress_ptr dinfo, unsigned char* buffer, size_t size) { struct jpeg_source_mgr* src; /* first call for this instance - need to setup */ if (dinfo->src == 0) { dinfo->src = (struct jpeg_source_mgr *) (*dinfo->mem->alloc_small) ((j_common_ptr) dinfo, JPOOL_PERMANENT, sizeof (struct jpeg_source_mgr)); } src = dinfo->src; src->next_input_byte = buffer; src->bytes_in_buffer = size; src->init_source = init_source; src->fill_input_buffer = fill_input_buffer; src->skip_input_data = skip_input_data; src->term_source = term_source; /* IJG recommend to use their function - as I don't know **** about how to do better, I follow this recommendation */ src->resync_to_restart = jpeg_resync_to_restart; } All I need to do is replace the jpeg_stdio_dest in my program with this code: int numBytes = 0; //size of jpeg after compression char * storage = new char[150000]; //storage buffer JOCTET *jpgbuff = (JOCTET*)storage; //JOCTET pointer to buffer jpeg_memory_dest(&cinfo,jpgbuff,150000,&numBytes); So I need some help to incorporate the above four lines into this function which now works but writes to a file instead of a memory: int write_jpeg_file( char *filename ) { struct jpeg_compress_struct cinfo; struct jpeg_error_mgr jerr; /* this is a pointer to one row of image data */ JSAMPROW row_pointer[1]; FILE *outfile = fopen( filename, "wb" ); if ( !outfile ) { printf("Error opening output jpeg file %s\n!", filename ); return -1; } cinfo.err = jpeg_std_error( &jerr ); jpeg_create_compress(&cinfo); jpeg_stdio_dest(&cinfo, outfile); /* Setting the parameters of the output file here */ cinfo.image_width = width; cinfo.image_height = height; cinfo.input_components = bytes_per_pixel; cinfo.in_color_space = color_space; /* default compression parameters, we shouldn't be worried about these */ jpeg_set_defaults( &cinfo ); /* Now do the compression .. */ jpeg_start_compress( &cinfo, TRUE ); /* like reading a file, this time write one row at a time */ while( cinfo.next_scanline < cinfo.image_height ) { row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components]; jpeg_write_scanlines( &cinfo, row_pointer, 1 ); } /* similar to read file, clean up after we're done compressing */ jpeg_finish_compress( &cinfo ); jpeg_destroy_compress( &cinfo ); fclose( outfile ); /* success code is 1! */ return 1; } Anybody could help me out a bit with it? I've tried meddling with it but I am not sure how to do it. I I just replace this line: jpeg_stdio_dest(&cinfo, outfile); It's not going to work. 
There is more that needs to change in that function, and I am a little lost among all those pointers and the memory management.

    Read the article

  • Sockets and multithreading

    - by V0idExp
    Hi to all! I have an interesting (to me) problem... There are two threads: one captures data from standard input and sends it through a socket to the server, and another receives data from a blocking socket. So, when there's no reply from the server, the recv() call waits indefinitely, right? But instead of blocking only its calling thread, it blocks the whole process! Why does this happen?
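
    For reference, a blocking recv() is normally expected to park only its own thread. A small self-contained Python sketch of that expectation (the server here is deliberately silent, and the port is chosen at random by the OS):

        import socket
        import threading
        import time

        # A listener that accepts a connection but never sends anything back.
        server = socket.socket()
        server.bind(("127.0.0.1", 0))
        server.listen(1)
        port = server.getsockname()[1]

        def silent_server():
            conn, _ = server.accept()
            time.sleep(2)              # never reply, then hang up
            conn.close()

        def blocked_reader():
            client = socket.create_connection(("127.0.0.1", port))
            data = client.recv(1024)   # blocks here until the peer closes
            print("reader woke up:", data)

        threading.Thread(target=silent_server, daemon=True).start()
        threading.Thread(target=blocked_reader, daemon=True).start()

        # The main thread is not held up by the blocked recv().
        for i in range(3):
            print("main thread still running", i)
            time.sleep(1)

    If every thread in the original program really stops, something other than recv() itself (a lock shared between the threads, or both jobs accidentally running on one thread) is usually to blame.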

    Read the article

  • Keeping a web request alive.

    - by The Machine
    I have a web application that helps download reports. But report generation sometimes takes a long time, and the web request times out at the intermediate proxy server (timeout: 90 secs). The workflow for downloading the report is straightforward: the client sends a request to the web server, and the web server generates the report and makes it available to the client as an Excel download. The Excel file is generated using POI and the download is provided using Spring's AbstractExcelView. What would be the best way to keep the web request alive (without increasing the timeout, of course)?
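
    One widely used alternative to holding a single request open is to split the work in two: the first request only submits a job and returns a token immediately, and the client then polls a status URL until the report is ready to download. A rough sketch of that job registry, written in Python only to show the shape of the idea (in the actual app this would map onto two Spring controller methods and an executor; all names here are invented):

        import uuid
        from concurrent.futures import ThreadPoolExecutor

        executor = ThreadPoolExecutor(max_workers=4)
        jobs = {}   # token -> Future holding the finished report bytes

        def generate_report(params):
            # Placeholder for the slow POI/Excel generation.
            return b"...excel bytes..."

        def submit(params):
            """Handler for the first request: start the job, return at once."""
            token = uuid.uuid4().hex
            jobs[token] = executor.submit(generate_report, params)
            return token                      # client keeps this and polls

        def poll(token):
            """Handler for the polling request: either 'not yet' or the file."""
            future = jobs[token]
            if not future.done():
                return {"status": "pending"}  # client retries after a short delay
            return {"status": "done", "report": future.result()}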

    Read the article

  • javascript JSONP callback function not defined

    - by bitsMix
        (function restoreURL() {
            function turnLongURL(data) {
                window.location = data.url;
            }
            var shortUrl = window.location.href;
            var url = "http://json-longurl.appspot.com/?url=" + shortUrl + "&callback=turnLongURL";
            var script = document.createElement('script');
            script.setAttribute('src', url);
            document.getElementsByTagName('head')[0].appendChild(script);
        })();

    The code is above, but Firebug tells me "turnLongURL is not defined". Why is that?

    Read the article

  • How to return xml from .net webservice

    - by kaibuki
    Hi guys! I am reading data into a DataSet and want to return XML from a .NET web service. So far I am trying to use return mydataset.getxml(); but it is not helping, as my method's return type is "DataSet". Is there any way I can get well-formatted XML? Thanks

    Read the article

  • Ajax and PHP problem not sending mail

    - by Dumbledore of flash
    Hi all, I have a problem here. I have two files, form.php and index.php. My form.php uses Ajax to fetch data from index.php, and index.php has a mail function which runs perfectly when I open index.php directly. But when form.php fetches data from index.php via Ajax, the mail function does not run. Can anybody tell me what the problem is? Why does the Ajax call not make index.php send mail?

    Read the article

  • Which is the correct way to use PDO in PHP?

    - by Runner
    One from here:

        $sth->execute(array(':calories' => $calories, ':colour' => $colour));

    The other from here:

        /*** reassign the variables again ***/
        $data = array('animal_id' => 4, 'animal_name' => 'bruce');

        /*** execute the prepared statement ***/
        $stmt->execute($data);

    My question is: :key or key? Sorry, I don't have a PHP environment here.

    Read the article

  • Convert file to html table with PERL

    - by user329313
    Hi everyone, I am trying to write a simple Perl CGI script that:

    - runs a CLI script
    - reads the resulting .out file and converts the data in the file to an HTML table

    Here is some sample data from the .out file:

        10.255.202.1   2472327594  1720341
        10.255.202.21  2161941840  1484352
        10.255.200.0   1642646268  1163742
        10.255.200.96  1489876452  1023546
        10.255.200.26  1289738466  927513
        10.255.202.18  1028316222  706959
        10.255.200.36  955477836   703926

    Any help would be much appreciated. -Sebastian
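
    The transformation itself is just "run the command, split each line on whitespace, wrap the fields in table cells". The question asks for Perl, but the shape is the same in any language; here it is sketched in Python purely for illustration, with the CLI command and the column headings as invented placeholders:

        import html
        import subprocess

        # Placeholder command; the real CLI script produces the .out data shown above.
        output = subprocess.run(["mytool", "--report"], capture_output=True, text=True).stdout

        rows = []
        for line in output.splitlines():
            fields = line.split()
            if len(fields) == 3:
                cells = "".join(f"<td>{html.escape(f)}</td>" for f in fields)
                rows.append(f"<tr>{cells}</tr>")

        # Column headings are assumptions; relabel them for the real data.
        table = ("<table>\n"
                 "<tr><th>host</th><th>bytes</th><th>packets</th></tr>\n"
                 + "\n".join(rows) +
                 "\n</table>")
        print("Content-Type: text/html\n")
        print(table)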

    Read the article

  • Manage groups of build configurations in Hudson

    - by Lóránt Pintér
    I'm using Hudson to build my application. I have several branches that come and go. Whenever there's a new branch, I have to set up the following builds for it:

    - a continuous build that runs after every change in SVN
    - a nightly build
    - a nightly site generation (I'm using Maven under the hood)
    - and a weekly integration build for some branches

    Currently this means I need to copy four template configurations and set them up with the branch URL. I don't like this for two reasons:

    - It's redundant, so modifying something is error-prone and takes a lot of time.
    - I need four full checkouts of the product per branch on every build slave, plus four separate private Maven repositories, not to mention the built artifacts. This is a lot of wasted space.

    What I'd like instead is to have one workspace and one configuration for all these builds. Is this possible with Hudson?

    Read the article

  • Why do you have to mark a class with the attribute [serializable] ?

    - by Blankman
    Seeing as you can convert any document to a byte array, save it to disk, and then rebuild the file in its original form (as long as you have metadata such as its filename), why do you have to mark a class with [Serializable]? Is that just the same idea: "metadata"-type information so that when you cast the object back to its class, things are mapped properly?

    Read the article

  • Why is C# statically typed?

    - by terrani
    I am a PHP web programmer who is trying to learn C#. I would like to know why C# requires me to specify the data type when creating a variable:

        Class classInstance = new Class();

    Why do we need to declare the data type before creating a class instance?

    Read the article

  • C vs. C++ for performance in memory allocation

    - by Andrei
    Hi, I am planning to participate in the development of code written in C for Monte Carlo analysis of complex problems. This code allocates huge data arrays in memory to speed up its performance; for that reason the author of the code chose C over C++, claiming that one can write faster and more reliable (with respect to memory leaks) code in C. Do you agree with that? What would be your choice if you needed to store 4-16 GB of data arrays in memory during a calculation?

    Read the article

  • How to access Java variables in Mule flows

    - by RohanRasane
    Scenario: I have a variable in a Java file which I want to access in the Mule config XML. How do I do that?

    Example: there is a web service which takes params like this: localhost/apiname?name="dynamic data". While hitting the web service, I want to pass the param "name" as dynamic data. How do I do that? I assume that if I'm able to access the Java variable in the XML file, then that will be possible.

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in Reducing the dimensionality of data with neural networks of autoencoding the olivetti face dataset with an adapted version of the MNIST digits matlab code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum the stacked RBMs are entering the fine-tuning stage with a large amount of error and consequently fail to improve much at the fine-tuning stage. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using a RBM with a smaller learning rate (as described in the paper) and with negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); I'm fairly confident I am following the instructions found in the supporting material but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible unit RBMs below, and for the whole deep training. The rest of the code can be found here. rbmvislinear.m: epsilonw = 0.001; % Learning rate for weights epsilonvb = 0.001; % Learning rate for biases of visible units epsilonhb = 0.001; % Learning rate for biases of hidden units weightcost = 0.0002; initialmomentum = 0.5; finalmomentum = 0.9; [numcases numdims numbatches]=size(batchdata); if restart ==1, restart=0; epoch=1; % Initializing symmetric weights and biases. vishid = 0.1*randn(numdims, numhid); hidbiases = zeros(1,numhid); visbiases = zeros(1,numdims); poshidprobs = zeros(numcases,numhid); neghidprobs = zeros(numcases,numhid); posprods = zeros(numdims,numhid); negprods = zeros(numdims,numhid); vishidinc = zeros(numdims,numhid); hidbiasinc = zeros(1,numhid); visbiasinc = zeros(1,numdims); sigmainc = zeros(1,numhid); batchposhidprobs=zeros(numcases,numhid,numbatches); end for epoch = epoch:maxepoch, fprintf(1,'epoch %d\r',epoch); errsum=0; for batch = 1:numbatches, if (mod(batch,100)==0) fprintf(1,' %d ',batch); end %%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% data = batchdata(:,:,batch); poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1))); batchposhidprobs(:,:,batch)=poshidprobs; posprods = data' * poshidprobs; poshidact = sum(poshidprobs); posvisact = sum(data); %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% poshidstates = poshidprobs > rand(numcases,numhid); %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);% + randn(numcases,numdims) if not using mean neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1))); negprods = negdata'*neghidprobs; neghidact = sum(neghidprobs); negvisact = sum(negdata); %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% err= sum(sum( (data-negdata).^2 )); errsum = err + errsum; if epoch>5, momentum=finalmomentum; else momentum=initialmomentum; end; %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% vishidinc = momentum*vishidinc + ... 
epsilonw*( (posprods-negprods)/numcases - weightcost*vishid); visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact); hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact); vishid = vishid + vishidinc; visbiases = visbiases + visbiasinc; hidbiases = hidbiases + hidbiasinc; %%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% end fprintf(1, '\nepoch %4i error %f \n', epoch, errsum); end dofacedeepauto.m: clear all close all maxepoch=200; %In the Science paper we use maxepoch=50, but it works just fine. numhid=2000; numpen=1000; numpen2=500; numopen=30; fprintf(1,'Pretraining a deep autoencoder. \n'); fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch); load fdata %makeFaceData; [numcases numdims numbatches]=size(batchdata); fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid); restart=1; rbmvislinear; hidrecbiases=hidbiases; save mnistvh vishid hidrecbiases visbiases; maxepoch=50; fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen); batchdata=batchposhidprobs; numhid=numpen; restart=1; rbm; hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases; save mnisthp hidpen penrecbiases hidgenbiases; fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2); batchdata=batchposhidprobs; numhid=numpen2; restart=1; rbm; hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases; save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2; fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen); batchdata=batchposhidprobs; numhid=numopen; restart=1; rbmhidlinear; hidtop=vishid; toprecbiases=hidbiases; topgenbiases=visbiases; save mnistpo hidtop toprecbiases topgenbiases; backpropface; Thanks for your time

    Read the article

  • Mail-Merge on Steroids: Can Word 2003 do this?

    - by richardtallent
    I have a huge report to put together, made up of over 1,000 smaller, nearly-identical reports. Each report includes:

    - General 1:1 information (basic mail-merge stuff).
    - Lots of text, some of which may need to be disabled or have alternate text based on a boolean field.
    - A few embedded images, preferably loaded via HTTP URL, but if they have to be on a file system I can do that. (Filenames will be provided as a field in the data source.) Fortunately, all images are roughly the same size/shape.
    - Several 1:m tables with a few fields apiece.

    The kicker is the master/child tables. I've seen examples for Word 2000 that do this by left-joining the master and child table and using some IF/THEN logic to know whether to jump to the next master record. But in my case I have several of these subtables, so that approach won't really work. So, can Word 2003 handle arbitrary master/child tables? If so, how? If not, I have considered InfoPath, but I haven't used it before, and it seems to be made for data entry, not long formatted reports. I'm a software developer, so I could always hack something together with a massive VBA macro, or generate the report in HTML on the web server (where the data is coming from anyway). But I'm hoping Word will work without such gymnastics, since it will give the ultimate users of the report template better control over formatting and minor changes.
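
    The master/child shaping itself is just a group-by on each child table. If the HTML-on-the-web-server fallback ever wins, the data step looks roughly like this sketch (all table and field names here are invented for illustration):

        from collections import defaultdict

        # Invented shapes: one master row per report, several child tables keyed by master id.
        masters = [{"id": 1, "name": "Site A"}, {"id": 2, "name": "Site B"}]
        child_tables = {
            "measurements": [{"master_id": 1, "value": 3.2}, {"master_id": 1, "value": 4.8}],
            "incidents":    [{"master_id": 2, "note": "pump failure"}],
        }

        # Group every child table by master id once, instead of left-joining everything.
        grouped = {name: defaultdict(list) for name in child_tables}
        for name, rows in child_tables.items():
            for row in rows:
                grouped[name][row["master_id"]].append(row)

        # Each report then gets its own master record plus only its own children.
        for master in masters:
            children = {name: grouped[name][master["id"]] for name in child_tables}
            print(master["name"], children)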

    Read the article

  • app engine modify xml

    - by Hoax
    Hi, I am writing a GWT app running on App Engine which needs to modify an XML file server-side. As far as I know there is no way to modify an XML file in the WAR directory or any of its subdirectories. What other possibilities do I have for storing that data? Can I use the Datastore somehow, or should I look for storage space somewhere else and access it there (if so, any recommendations)? Any help is appreciated!

    Read the article
