Search Results

Search found 1072 results on 43 pages for 'phase'.

Page 11 of 43

  • Microsoft backs Node.js and joins development of the client/server JavaScript library

    Microsoft backs Node.js and joins development of the client/server JavaScript library. On the Interoperability blog, Claudio Caldato (Principal Program Manager of the Interoperability Strategy Team) announces that Microsoft will take part in developing a Windows version of Node.js. The first objective is to add a high-performance Windows IOCP API to Node. Once this initial phase is complete, an executable (node.exe) will be available on the nodejs.org site, and Node.js will then run on Win...

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So we've recently finished the first phase of our project. We used agile with fortnightly sprints, and while the application turned out well, we're now turning our eyes to some of the maintenance tasks. One such task is that all of our documentation exists in the form of specs. These specs each describe one or more stories and are generally a body of work that a few devs could knock over in a week. For development, that works really well: every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do. From a documentation point of view, though, this has become a mess. Because our specs focus on delivering just-in-time requirements to developers, we haven't placed much emphasis on the big picture. Specs come from all different angles - one might describe a standard function, another parts of a workflow, another a particular screen - and now we have business rules about our application scattered across 120 documents. Looking for the document that covers a particular business rule or function is hard because you don't know which document holds that information, and making a change request is equally hard because, once again, we're unsure which spec to change.

    We have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, and in this time I'd like to revisit our processes. The way we have worked so far, delivering fortnightly specs, works well, but we also need a way to manage our documentation so that the business rules for a given function or workflow are easy to locate and change. I have two ideas.

    One is to compile all of our specs into a series of master specs, broken down by a few broad functional areas: the specs describe the sprint, the master specs describe the system. The problems I can see are 1) our existing 120 specs are not all neatly divided into broad functional areas - some will require breaking up, merging, etc., which will take a lot of time - and 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work; and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big a mess and to manage that mess going forward: go through each spec, assign keywords to it, and then search by keyword when we want to find a function. The problem I can see is 1) the business rules are still scattered everywhere; keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or experience to share about how best to manage documentation, I'd really appreciate it.

    Read the article

  • Google Chrome 5 and its many improvements officially released simultaneously on Linux, Mac and Windows

    Update of 26/05/10: Google Chrome 5 and its many improvements are officially released, simultaneously on Linux, Mac and Windows. The arrival of Chrome 6 on the dev channel hinted at it (see above): Chrome 5 was in its finalization phase. So it is no surprise to see the official version of Google's browser arrive today with its many improvements, including one "of 30 to 35%" to the be...

    Read the article

  • What is a lightweight lock in distributed shared memory systems?

    - by Kutluhan Metin
    I started reading Tanenbaum's Distributed Systems book a while ago. I read about two-phase locking and timestamp reordering in the transactions chapter. While digging deeper on Google, I came across lightweight transactions/lightweight transactional memory, but I couldn't find any good explanation or implementation. So what is lightweight transactional memory? What are the benefits of lightweight locks? And how can I implement them?
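
    For what it's worth, "lightweight" in this context usually means a lock that avoids a heavyweight kernel mutex or a full lock manager by doing an atomic compare-and-swap on the fast path. Below is a minimal single-machine sketch in Java, for illustration only; a distributed-shared-memory version would add node-to-node ownership transfer on top of the same idea.

        import java.util.concurrent.atomic.AtomicBoolean;

        // A minimal CAS-based spinlock: the uncontended acquire is a single
        // atomic instruction, with no kernel involvement.
        public final class LightweightLock {
            private final AtomicBoolean held = new AtomicBoolean(false);

            public void lock() {
                while (!held.compareAndSet(false, true)) {
                    Thread.yield(); // back off while another thread holds the lock
                }
            }

            public void unlock() {
                held.set(false);
            }
        }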

    Read the article

  • When is a Kernel update due for 11.10?

    - by Mysterio
    Thanks to Phoronix, there seems to be a fix for the power regression/overheating bug in Linux kernel 3.0.1 bouncing around on the Internet. However, this supposed fix - which, from what I have read, has been applied to the kernel in a testing phase - is not newbie-friendly (if you know what I mean). So I am guessing it will be included in the kernel update for 11.10. If it will be, when is it due? Linked question: Kernel patch that solves battery issues for Ubuntu?

    Read the article

  • Help yourself... if you like

    - by rachelp
    At Red Gate we enjoy talking to our customers. Really! If you've read recent blog posts by members of some of our customer-facing teams, you'll have spotted the pleasure they take in their work. In case you missed those posts, here they are: from our Finance team, Finance: Friends, not foes!; and from our reception desk, The Front line of Communication. However, we recognise that sometimes our customers would like to be able to solve their problems or answer their questions without talking to us - they're in a hurry, it's outside office hours... or perhaps they just prefer not to pick up the phone and call.

    Self-service customer care

    So we've begun a programme of work to enable more self-service; whether it's finding the answer to a "how do I...?" question or getting access to a record of what product licenses they own, we want to make it much easier for our customers to get hold of this information for themselves. If they want to.

    Phase 1: make it easier to find information

    We decided to start by tackling findability. We've got loads of useful information on our website, but it's sometimes difficult to find, so we've been working on improving our site search. Step 1 has been to replace the search engine, clean up the search UI, and make it consistent across the site. We're nearly there! The idea is that if we improve the site search, it will be easier - and much more pleasant - for people to find the information they need. The new search will go live some time in April, and then we'll be gathering feedback, looking at web analytics (more about this in an earlier article), and working out what improvements we still need to make. We'd love to hear what you think, so do give your feedback or drop us a line. Or pick up the phone and call, if you like.

    What do you think?

    While I've got your attention, I'd love to hear what people think about self-service customer care. Do you like to call, email, live chat... or do you prefer to dig around and find answers yourself? Who's getting it right: what self-service sites do you like? P.S. Watch this space for news of phase 2.

    Read the article

  • Azure: Mobile Services and Web Sites enter production; the infrastructure stores 8.5 trillion objects and handles 900,000 transactions per second

    Windows Azure: Mobile Services and Web Sites enter production. The infrastructure stores 8.5 trillion objects and handles 900,000 transactions per second. Available in preview since August 2012, Windows Azure Mobile Services has reached general availability (GA) together with Windows Azure Web Sites, a milestone that marks these services' entry into the production phase. As a reminder, Windows Azure Mobile Services is a Backend-as-a-Service (BaaS) platform that provides a turnkey cloud solution for accelerating the development of connected client-side applications.

    Read the article

  • Planning a Website and What to Expect

    A successful project begins with careful planning. No matter the size of the task at hand (whether running errands or plotting world domination), ample thought needs to be given to the task as a whole before the work begins. This is especially true for website development. Planning the strategy for the site - and how the site fits into the larger vision of the project, beyond the scope of the online presence - is an absolutely essential phase for both the website developer and the client.

    Read the article

  • AWStats is processing log files but does not display them

    - by Wouter
    I've set up AWStats on my VPS to get some more insight into the traffic coming to my site. As instructed, I ran a manual build/update, which ran fine:

      sudo -u www-data ./awstats.pl -config=xxxx.com
      Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925)
      From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"...
      Phase 1 : First bypass old records, searching new record...
      Searching new records from beginning of log file...
      Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
      Warning: awstats has detected that some hosts names were already resolved in your logfile /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |.
      If DNS lookup was already made by the logger (web server), you should change your setup DNSLookup=1 into DNSLookup=0 to increase awstats speed.
      Jumped lines in file: 0
      Parsed lines in file: 814
      Found 0 dropped records,
      Found 0 corrupted records,
      Found 0 old records,
      Found 814 new qualified records.

    It also produced a file in the DatDir, /var/lib/awstats/awstats052010.xxxx.com.txt, which contains what I would expect. BUT when I visit xxxx.com/awstats/awstats.pl, it tells me "Last Update: Never updated (See 'Build/Update' on awstats_setup.html page)" and the rest of the page is blank. I'm pretty sure I set it up correctly, but now I cannot figure out why this is happening. Hopefully someone smarter than me can help me. Thank you in advance.
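
    Two settings worth double-checking, given the warning in the update output and the DatDir location above. This is a sketch of /etc/awstats/awstats.xxxx.com.conf with paths taken from the post; the CGI at xxxx.com/awstats/awstats.pl must read this same config file to find the data it is supposed to display:

      DirData="/var/lib/awstats"   # must match where awstats052010.xxxx.com.txt was written
      DNSLookup=0                  # the update warned that hostnames were already resolved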

    Read the article

  • Ipsec config problem // openswan

    - by user90696
    I am trying to configure IPsec on a server with Openswan as the client, but I receive an error - possibly an authentication error. What is wrong in my config? Thank you for your answers.

      #1: STATE_MAIN_I2: sent MI2, expecting MR2
      003 "f-net" #1: received Vendor ID payload [Cisco-Unity]
      003 "f-net" #1: received Vendor ID payload [Dead Peer Detection]
      003 "f-net" #1: ignoring unknown Vendor ID payload [ca917959574c7d5aed4222a9df367018]
      003 "f-net" #1: received Vendor ID payload [XAUTH]
      108 "f-net" #1: STATE_MAIN_I3: sent MI3, expecting MR3
      003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
      010 "f-net" #1: STATE_MAIN_I3: retransmission; will wait 20s for response
      003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
      003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
      003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
      010 "f-net" #1: STATE_MAIN_I3: retransmission; will wait 40s for response
      031 "f-net" #1: max number of retransmissions (2) reached STATE_MAIN_I3. Possible authentication failure: no acceptable response to our first encrypted message
      000 "f-net" #1: starting keying attempt 2 of at most 3, but releasing whack

    The other side is a Cisco ASA. The parameters for my connection on our Linux server:

      VPN Gateway: 8.*.*.* (Cisco)

      Phase 1
      Exchange Type: Main Mode
      Identification Type: IP Address
      Local ID: 4.*.*.* (our Linux server IP)
      Remote ID: 8.*.*.* (VPN server IP)
      Authentication: PSK (pre-shared key)
      Diffie-Hellman Key Group: DH 5 (1536 bit) or DH 2 (1024 bit)
      Encryption Algorithm: AES 256
      HMAC Function: SHA-1
      Lifetime: 86,400 seconds / no volume limit

      Phase 2
      Security Protocol: ESP
      Connection Mode: Tunnel
      Encryption Algorithm: AES 256
      HMAC Function: SHA-1
      Lifetime: 3600 seconds / 4,608,000 kilobytes
      DPD / IKE Keepalive: 15 seconds
      PFS: off
      Remote Network: 192.168.100.0/24
      Local Network 1: 10.0.0.0/16
      ...............
      Local Network 5

    Current Openswan config:

      config setup
          klipsdebug=all
          plutodebug="control parsing"
          protostack=netkey
          nat_traversal=no
          virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
          oe=off
          nhelpers=0

      conn f-net
          type=tunnel
          keyexchange=ike
          authby=secret
          auth=esp
          esp=aes256-sha1
          keyingtries=3
          pfs=no
          aggrmode=no
          keylife=3600s
          ike=aes256-sha1-modp1024
          left=4.*.*.*
          leftsubnet=10.0.0.0/16
          leftid=4.*.*.*
          leftnexthop=%defaultroute
          right=8.*.*.*
          rightsubnet=192.168.100.0/24
          rightid=8.*.*.*
          rightnexthop=%defaultroute
          auto=add
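
    Since the peer authenticates with a pre-shared key and the failure comes right after the first encrypted main-mode message, the PSK is the first thing to verify. A sketch of the matching /etc/ipsec.secrets entry (the secret string is a placeholder), plus the phase 1 lifetime the ASA expects, as an Openswan conn parameter:

      # /etc/ipsec.secrets -- left-IP right-IP : PSK "secret"
      4.*.*.* 8.*.*.* : PSK "the-shared-secret-configured-on-the-ASA"

      # in conn f-net, to match the ASA's 86,400-second phase 1 lifetime
      # (assumption: the Openswan default differs from the ASA's setting)
      ikelifetime=86400s

    After editing the secrets file, reload it with ipsec auto --rereadsecrets and retry the connection.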

    Read the article

  • VSS Post Backup failures for Virtual Server 2005 R2 SP1 virtual machines

    - by califguy4christ
    We've been seeing strange errors with the Volume Shadow Copy service on our Virtual Server 2005 R2 SP1 host. It appears to be failing on a strange mount point in the C:\WINDOWS\Temp\ folder, which I believe is used by VSS to mount a writeable image file. To summarize:

    - The Microsoft Virtual Server 2005 Writer continually goes into a failed retryable state
    - The Virtual Server log reports errors during the Post Backup phase
    - VSS reports errors backing up a mount point of unknown origin
    - The mount point causes NTFS and ftdisk errors

    The host is x86 Windows Server 2003 Standard, SP2. The virtual machine is the same. Both use basic disks. Here is the writer state:

      Writer name: 'Microsoft Virtual Server 2005 Writer'
      Writer Id: {76afb926-87ad-4a20-a50f-cdc69412ddfc}
      Writer Instance Id: {78df98e2-bf19-4804-890b-15865efef3bd}
      State: [11] Failed
      Last error: Retryable error

    From the Virtual Server log:

      Virtual Server - Vss Writer - Event ID 1035: The VSS writer for Virtual Server failed during the PostBackup phase. The guest shadow copies did not get exposed on the host machine, after mounting all the virtual hard disks of the virtual machine VMACHINE.

    From the Application log:

      VSS - None - Event ID 12290: Volume Shadow Copy Service warning: GetVolumeInformationW( \\?\Volume{fb84bae7-87f5-11dd-9832-001cc4961ca6}\,NULL,0, NULL,NULL,[0x00000000], , 260) == 0x0000045d. hr = 0x00000000.

    From the System log:

      Ntfs - Disk - Event ID 55: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:\WINDOWS\Temp\ {fb84bae7-87f5-11dd-9832-001cc49....

    My current theory is that VSS creates a mount point for an image file of the VHD, then the software panics for some reason, leaving everything in an inconsistent state. Removing the mount point doesn't resolve the problem. All of the other disks check out fine with CHKDSK. There's no exclusion option for VHDs or to turn off online backups. Has anyone seen this kind of thing before, or can you point me in the right direction for getting more information about the mount point and its origins? I haven't been able to trace what application is creating that mount point.
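
    The writer-state listing above looks like output from the built-in VSS admin tool, which can be polled around the backup window to watch the writer transition into the failed state (both commands are standard on Windows Server 2003):

      vssadmin list writers
      vssadmin list shadows

    The first shows each writer's state and last error; the second lists any shadow copies left behind, which may help correlate the orphaned mount point under C:\WINDOWS\Temp\ with a specific snapshot.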

    Read the article

  • Trouble with local id / remote id configuration of VPN

    - by Lynn Owens
    I have a NetGear UTM firewall and a Windows machine running NetGear's VPN client. I can put the Windows machine on the UTM network or take it off of it. When I am cabled into the local (internal) network, the following configuration works:

      UTM:
      Local Id: Local WAN IP: (the UTM's WAN IP address)
      Remote Id: User FQDN: utm_remote1.com

      Client:
      Local Id: DNS: utm_remote1.com
      Remote Id: (the UTM's WAN IP address)
      Gateway authentication: preshared key
      Policy remote endpoint: FQDN: utm_remote1.com

    But when I'm off the UTM's internal network and simply coming in from the internet, this does not work: it simply repeats SEND phase 1 before giving up. Since I know that the UTM WAN IP is accessible from both inside and outside the network, I figured the problem was with the client local id. So I tried the following:

      UTM:
      Local Id: Local WAN IP: (the UTM's WAN IP address)
      Remote Id: (a DN of a self-signed certificate I created for the client and uploaded into the UTM certificates)

      Client:
      Local Id: (the DN of the aforementioned self-signed cert)
      Remote Id: (the UTM's WAN IP address)
      Gateway authentication: (the aforementioned self-signed cert)
      Policy remote endpoint: ... er, ... my choices are IP and FQDN... Not sure what to put here

    No matter what I've tried, it just keeps repeating SEND phase 1. Any ideas?

    Read the article

  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First-time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing. We have two CheckPoint edge devices at sites A & B, independently managed, connecting to two Amazon private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device. A VPN tunnel exists between the two CheckPoint devices as well. Between sites A & B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the 1-hour period is due to IPsec phase 2 renegotiation, but can't be sure. On our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation. Between sites A & B and the Amazon VPC in Oregon, we have no issues: both tunnels are up and fail over properly. The CheckPoint gateways are using domain-based VPNs. According to CheckPoint's advice to Amazon, this won't work; yet, in Oregon, it does. We've pursued this with Amazon and, despite the fact that it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.

    Read the article

  • Failed to re-publish a page - Tridion 2011 SP1

    - by Wilson Yu
    We are getting a strange error when re-publishing the same page. The page was published successfully the first time, and we can see the page on the presentation server. It failed with the following error (see below) when we tried to publish it again (no change to the page). The page ran OK within Template Builder and we got the correct HTML output; it failed in the last Committing Deployment step (Prepare Transport, Transporting, Preparing Deployment and Deploying are all successful). Once it fails to publish the second time, it always fails to publish, and we can't un-publish it either. Also, when we make a copy of the failed page and create a new page, we can publish the new page the first time; the new page then fails to publish the second time with the same error. Does anyone know what would cause this error? Any help would be greatly appreciated. Here is the error message:

      Committing Deployment Failed
      Phase: Deployment Prepare Commit Phase failed, Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: "", Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: ""

    Read the article

  • cisco asa + action drop issue

    - by ghp
    I have created a tunnel between the 10.x.y.z network and 122.a.b.c. The tunnel is up and active, but when I run packet-tracer, I get ACTION: drop. I have also enabled same-security-traffic permit intra-interface. Can someone help me understand what this drop means?

      Result:
      input-interface: inside
      input-status: up
      input-line-status: up
      output-interface: outside
      output-status: up
      output-line-status: up
      Action: drop
      Drop-reason: (acl-drop) Flow is denied by configured rule

    @Shane Madden: please find the packet tracer output below.

      CASA5K-A# config t
      CASA5K-A(config)# packet-tracer input inside tcp 10.x.y.112 0 122.a.b.c 0

      Phase: 1
      Type: ROUTE-LOOKUP
      Subtype: input
      Result: ALLOW
      Config:
      Additional Information: in 0.0.0.0 0.0.0.0 outside

      Phase: 2
      Type: ACCESS-LIST
      Subtype:
      Result: DROP
      Config: Implicit Rule
      Additional Information:

      Result:
      input-interface: inside
      input-status: up
      input-line-status: up
      output-interface: outside
      output-status: up
      output-line-status: up
      Action: drop
      Drop-reason: (acl-drop) Flow is denied by configured rule

    The access-groups are as follows:

      access-group acl-inbound in interface outside
      access-group acl-outbound in interface inside

    and the access-lists are:

      access-list acl-inbound extended permit tcp any any gt 1023
      access-list acl-outbound extended permit ip object-group net-Source object net-dest
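
    One thing to note: the trace above was run with source and destination TCP ports of 0, which ordinary permit entries will not match and which commonly produces an implicit-rule drop on its own. It may be worth re-running with realistic ports before concluding the ACLs are at fault (the port numbers below are hypothetical):

      packet-tracer input inside tcp 10.x.y.112 1025 122.a.b.c 80

    This simulates a client on the inside connecting to port 80 on the remote network, using the same syntax as the trace above.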

    Read the article

  • jetty - javax.naming.InvalidNameException: A flat name can only have a single component

    - by Dinesh Pillay
    I have been banging my head against this for too long now. I'm trying to get Maven + Jetty + JOTM to play nicely, but it looks like it's too much to ask for :( Below is my jetty.xml:

      <?xml version="1.0"?>
      <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
      <Configure id="Server" class="org.mortbay.jetty.Server">
        <New id="jotm" class="org.objectweb.jotm.Jotm">
          <Arg type="boolean">true</Arg>
          <Arg type="boolean">false</Arg>
          <Call id="tm" name="getTransactionManager" />
          <Call id="ut" name="getUserTransaction" />
        </New>
        <New class="org.mortbay.jetty.plus.naming.Resource">
          <Arg />
          <Arg>javax.transaction.TransactionManager</Arg>
          <Arg><Ref id="ut" /></Arg>
        </New>
        <New id="tx" class="org.mortbay.jetty.plus.naming.Transaction">
          <Arg><Ref id="ut" /></Arg>
        </New>
        <New class="org.mortbay.jetty.plus.naming.Resource">
          <Arg>myxadatasource</Arg>
          <Arg>
            <New id="myxadatasourceA" class="org.enhydra.jdbc.standard.StandardXADataSource">
              <Set name="DriverName">org.apache.derby.jdbc.EmbeddedDriver</Set>
              <Set name="Url">jdbc:derby:protodb;create=true</Set>
              <Set name="User"></Set>
              <Set name="Password"></Set>
              <Set name="transactionManager"><Ref id="tm" /></Set>
            </New>
          </Arg>
        </New>
        <New id="protodb" class="org.mortbay.jetty.plus.naming.Resource">
          <Arg>jdbc/protodb</Arg>
          <Arg>
            <New class="org.enhydra.jdbc.pool.StandardXAPoolDataSource">
              <Arg><Ref id="myxadatasourceA" /></Arg>
              <Set name="DataSourceName">myxadatasource</Set>
            </New>
          </Arg>
        </New>

    And this is the Maven plugin configuration:

      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>maven-jetty-plugin</artifactId>
        <configuration>
          <scanIntervalSeconds>10</scanIntervalSeconds>
          <stopKey>ps</stopKey>
          <stopPort>7777</stopPort>
          <webAppConfig>
            <contextPath>/ps</contextPath>
          </webAppConfig>
          <connectors>
            <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
              <port>7070</port>
              <maxIdleTime>60000</maxIdleTime>
            </connector>
          </connectors>
          <jettyConfig>src/main/webapp/WEB-INF/jetty.xml</jettyConfig>
        </configuration>
        <executions>
          <execution>
            <id>start-jetty</id>
            <phase>pre-integration-test</phase>
            <goals>
              <goal>run</goal>
            </goals>
            <configuration>
              <scanIntervalSeconds>0</scanIntervalSeconds>
              <daemon>true</daemon>
            </configuration>
          </execution>
          <execution>
            <id>stop-jetty</id>
            <phase>post-integration-test</phase>
            <goals>
              <goal>stop</goal>
            </goals>
          </execution>
        </executions>
        <dependencies>
          <dependency>
            <groupId>org.apache.derby</groupId>
            <artifactId>derby</artifactId>
            <version>10.6.1.0</version>
          </dependency>
          <dependency>
            <groupId>jotm</groupId>
            <artifactId>jotm</artifactId>
            <version>2.0.10</version>
            <exclusions>
              <exclusion>
                <groupId>javax.resource</groupId>
                <artifactId>connector</artifactId>
              </exclusion>
            </exclusions>
          </dependency>
          <dependency>
            <groupId>com.experlog</groupId>
            <artifactId>xapool</artifactId>
            <version>1.5.0</version>
          </dependency>
          <dependency>
            <groupId>javax.resource</groupId>
            <artifactId>connector-api</artifactId>
            <version>1.5</version>
          </dependency>
          <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>jta</artifactId>
            <version>1.0.1B</version>
          </dependency>
          <!--
          <dependency>
            <groupId>javax.jts</groupId>
            <artifactId>jts</artifactId>
            <version>1.0</version>
          </dependency>
          -->
        </dependencies>
      </plugin>

    I am using maven-jetty-plugin-6.1.24 because I couldn't get the later ones to work either. When I execute this, I get the following exception:

      2010-06-16 09:03:13.423:WARN::Config error at javax.transaction.TransactionManager
      java.lang.reflect.InvocationTargetException
      [INFO] ------------------------------------------------------------------------
      [ERROR] BUILD ERROR
      [INFO] ------------------------------------------------------------------------
      [INFO] Failure A flat name can only have a single component
      [INFO] ------------------------------------------------------------------------
      Caused by: javax.naming.InvalidNameException: A flat name can only have a single component
        at javax.naming.NameImpl.addAll(NameImpl.java:621)
        at javax.naming.CompoundName.addAll(CompoundName.java:442)
        at org.mortbay.jetty.plus.naming.NamingEntryUtil.makeNamingEntryName(NamingEntryUtil.java:136)
        at org.mortbay.jetty.plus.naming.NamingEntry.save(NamingEntry.java:196)
        at org.mortbay.jetty.plus.naming.NamingEntry.<init>(NamingEntry.java:58)
        at org.mortbay.jetty.plus.naming.Resource.<init>(Resource.java:34)
        ... 31 more

    Help!
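
    The stack trace points at the naming entry created for javax.transaction.TransactionManager - the Resource with the empty first <Arg />. As a guess, not verified against the jetty-plus source: the empty scope argument plus the dotted string may be getting parsed as a compound JNDI name where a single flat component is expected. One hedged variant worth trying is to drop the empty scope argument and bind the actual transaction manager reference:

      <New class="org.mortbay.jetty.plus.naming.Resource">
        <Arg>javax.transaction.TransactionManager</Arg>
        <Arg><Ref id="tm" /></Arg>
      </New>

    Note also that the original binds the UserTransaction (<Ref id="ut" />) under the TransactionManager name, which looks like a copy/paste slip even if it is not the cause of the exception.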

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in "Reducing the dimensionality of data with neural networks" - autoencoding the Olivetti face dataset with an adapted version of the MNIST digits Matlab code - but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs enter the fine-tuning stage with a large amount of error and consequently fail to improve much at the fine-tuning stage. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); I'm fairly confident I am following the instructions found in the supporting material, but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible-unit RBMs below, followed by the whole deep training script. The rest of the code can be found here.

    rbmvislinear.m:

      epsilonw  = 0.001; % Learning rate for weights
      epsilonvb = 0.001; % Learning rate for biases of visible units
      epsilonhb = 0.001; % Learning rate for biases of hidden units
      weightcost = 0.0002;
      initialmomentum = 0.5;
      finalmomentum = 0.9;

      [numcases numdims numbatches] = size(batchdata);

      if restart == 1,
        restart = 0;
        epoch = 1;
        % Initializing symmetric weights and biases.
        vishid = 0.1*randn(numdims, numhid);
        hidbiases = zeros(1,numhid);
        visbiases = zeros(1,numdims);
        poshidprobs = zeros(numcases,numhid);
        neghidprobs = zeros(numcases,numhid);
        posprods = zeros(numdims,numhid);
        negprods = zeros(numdims,numhid);
        vishidinc = zeros(numdims,numhid);
        hidbiasinc = zeros(1,numhid);
        visbiasinc = zeros(1,numdims);
        sigmainc = zeros(1,numhid);
        batchposhidprobs = zeros(numcases,numhid,numbatches);
      end

      for epoch = epoch:maxepoch,
        fprintf(1,'epoch %d\r',epoch);
        errsum = 0;
        for batch = 1:numbatches,
          if (mod(batch,100)==0) fprintf(1,' %d ',batch); end

          %%%%%%%%% START POSITIVE PHASE %%%%%%%%%
          data = batchdata(:,:,batch);
          poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
          batchposhidprobs(:,:,batch) = poshidprobs;
          posprods = data' * poshidprobs;
          poshidact = sum(poshidprobs);
          posvisact = sum(data);
          %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%
          poshidstates = poshidprobs > rand(numcases,numhid);

          %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%
          negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
          neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
          negprods = negdata'*neghidprobs;
          neghidact = sum(neghidprobs);
          negvisact = sum(negdata);
          %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%

          err = sum(sum( (data-negdata).^2 ));
          errsum = err + errsum;
          if epoch>5, momentum=finalmomentum; else momentum=initialmomentum; end;

          %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%
          vishidinc = momentum*vishidinc + epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
          visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
          hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
          vishid = vishid + vishidinc;
          visbiases = visbiases + visbiasinc;
          hidbiases = hidbiases + hidbiasinc;
          %%%%%%%%% END OF UPDATES %%%%%%%%%
        end
        fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
      end

    dofacedeepauto.m:

      clear all
      close all

      maxepoch = 200; % In the Science paper we use maxepoch=50, but it works just fine.
      numhid = 2000; numpen = 1000; numpen2 = 500; numopen = 30;

      fprintf(1,'Pretraining a deep autoencoder. \n');
      fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

      load fdata % makeFaceData;
      [numcases numdims numbatches] = size(batchdata);

      fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
      restart = 1;
      rbmvislinear;
      hidrecbiases = hidbiases;
      save mnistvh vishid hidrecbiases visbiases;

      maxepoch = 50;
      fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
      batchdata = batchposhidprobs;
      numhid = numpen;
      restart = 1;
      rbm;
      hidpen = vishid; penrecbiases = hidbiases; hidgenbiases = visbiases;
      save mnisthp hidpen penrecbiases hidgenbiases;

      fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
      batchdata = batchposhidprobs;
      numhid = numpen2;
      restart = 1;
      rbm;
      hidpen2 = vishid; penrecbiases2 = hidbiases; hidgenbiases2 = visbiases;
      save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

      fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
      batchdata = batchposhidprobs;
      numhid = numopen;
      restart = 1;
      rbmhidlinear;
      hidtop = vishid; toprecbiases = hidbiases; topgenbiases = visbiases;
      save mnistpo hidtop toprecbiases topgenbiases;

      backpropface;

    Thanks for your time

    Read the article

  • Eclipse (STS), Maven and maven-minify-plugin, can they work together?

    - by CodeReaper
    Hi, I am working on a project where I am in charge of HTML, CSS and JavaScript. I found this maven-minify-plugin, which seemed to do just what I wanted. Everything is good when I deploy to the server using Maven, but when I use Eclipse (STS, www.springsource.com/products/sts) to run the project on localhost, no CSS or JS file is generated by the plugin. Does anyone have experience with this Maven plugin and can tell me whether running it on localhost should be possible or not? Does anyone know of another plugin I can use to (combine and) minify JavaScript and CSS files both when running on localhost in Eclipse and when deploying using Maven? Any help appreciated...

    ---- extra information ----

    I basically just copied in what it said on the plugin webpage, so I have these bits in my pom.xml:

      ....
      <build>
        <plugins>
          ....
          <plugin>
            <groupId>com.samaxes.maven</groupId>
            <artifactId>maven-minify-plugin</artifactId>
            <version>1.1</version>
            <executions>
              <execution>
                <id>default-minify</id>
                <phase>process-resources</phase>
                <configuration>
                  <cssFiles>
                    ....
                    <param>forms.css</param>
                    <param>jquery.droppy.css</param>
                    <param>jquery.jgrowl.css</param>
                  </cssFiles>
                  <jsFiles>
                    ....
                    <param>jquery.droppy.js</param>
                    <param>jquery.jgrowl.js</param>
                  </jsFiles>
                  <jsFinalFile>script.js</jsFinalFile>
                  <linebreak>-1</linebreak>
                  <nomunge>false</nomunge>
                  <verbose>false</verbose>
                  <preserveAllSemiColons>false</preserveAllSemiColons>
                  <disableOptimizations>false</disableOptimizations>
                  <bufferSize>4096</bufferSize>
                </configuration>
                <goals>
                  <goal>minify</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
      ....

    Should/can I bind the plugin to a different phase? I just use mvn clean package and move the snapshot into Tomcat to deploy on the server. I am unsure how to explain how I run the webapp on localhost, but here goes: I have a vanilla Tomcat that I defined as a server in Eclipse, and then specified that the webapp should always build in that "server".
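
    Since the minify execution above is bound to the process-resources phase, and the Eclipse builder does not run the Maven lifecycle when it publishes the webapp (which is the likely reason nothing is generated on localhost), one low-tech workaround is to generate the files once from the command line before launching in Eclipse:

      mvn process-resources

    This runs every plugin bound to process-resources, including the minify goal above, without doing a full package. Whether the generated files then reach the localhost deployment depends on how STS publishes the webapp, so treat this as a guess to try rather than a guaranteed fix.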

    Read the article

  • developing maven plugin, how to exclude bitkeeper files

    - by Denali
    Hi there, I am trying to write my first Maven plugin. I'd like to exclude all the Java files related to the source repository I'm using, which is BitKeeper. These files live in directories called SCCS. I can't for the life of me figure out how to do this. When I add maven-compiler-plugin with excludes data, it works (the bk files are excluded) if I run mvn compiler:compile explicitly. But this is not binding to the compile phase, so that when I run mvn compile, it blows up trying to compile a source-control-specific Java file. Any help or pointers appreciated. Another thing to note: everything works perfectly if I change the packaging from "maven-plugin" to "jar", which of course I can't do permanently, since this is a Maven plugin I am trying to write. I'm sorry if this is answered elsewhere; I've looked around for several hours here and through the Maven docs, but everything on this topic seems to be related to writing code that will be packaged in jars, not Maven plugins. Here's my pom.xml:

      <project>
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.mycomp.mygroup</groupId>
        <artifactId>special-persistence-plugin</artifactId>
        <packaging>maven-plugin</packaging>
        <version>1.0-SNAPSHOT</version>
        <name>Special Persistence Plugin</name>
        <dependencies>
          <dependency>
            <groupId>org.apache.maven</groupId>
            <artifactId>maven-plugin-api</artifactId>
            <version>2.0</version>
          </dependency>
        </dependencies>
        <build>
          <plugins>
            <plugin>
              <artifactId>maven-compiler-plugin</artifactId>
              <configuration>
                <source>1.6</source>
                <target>1.6</target>
                <encoding>UTF-8</encoding>
                <excludes>
                  <exclude>**/SCCS/**/*.java</exclude>
                </excludes>
                <phase>compile</phase>
                <goals>
                  <goal>compiler:compile</goal>
                </goals>
              </configuration>
            </plugin>
          </plugins>
        </build>
      </project>

    Thank you to anyone with ideas about this,
    -Denali
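
    One thing that stands out in the pom above: <phase> and <goals> are not valid inside <configuration> (they belong in an <execution> element), so Maven will not attach anything to the lifecycle from there. A sketch of the plugin section with those elements removed, leaving the excludes in the plugin-level configuration so they also apply to the compile goal that the default lifecycle binds for you (untested against maven-plugin packaging specifically, but this is the standard way to configure the default compile step):

      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
          <encoding>UTF-8</encoding>
          <excludes>
            <exclude>**/SCCS/**/*.java</exclude>
          </excludes>
        </configuration>
      </plugin>

    Plugin-level <configuration> applies to every execution of that plugin, including lifecycle-bound ones, so mvn compile should then pick up the excludes the same way mvn compiler:compile does.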

    Read the article

  • What is Agile Modeling and why do I need it?

    What is Agile Modeling and why do I need it? Agile Modeling is an add-on to existing agile methodologies like Extreme Programming (XP) and the Rational Unified Process (RUP). Agile Modeling enables developers to develop a customized software development process that actually meets their current development needs and is flexible enough to adjust in the future. According to Scott Ambler, Agile Modeling consists of five core values that make the methodology effective and lightweight.

    Agile Modeling core values:
    - Communication
    - Simplicity
    - Feedback
    - Courage
    - Humility

    Communication is a key component of any successful project. Open communication between stakeholders and the development team is essential when developing new applications or maintaining legacy systems. Agile models promote communication among software development teams and stakeholders. Furthermore, agile models provide a common understanding of an application for members of a software development team, giving them a universal point of reference. The use of simplicity in agile models enables the exploration of new ideas and concepts through basic diagrams instead of investing the time in writing tens or hundreds of lines of code. Feedback on application development is essential; it allows a development team to confirm that the development path is on track. Agile models allow for quick feedback from stakeholders because minimal to no technical expertise is required to understand basic models. Courage is important because you need to make important decisions and be able to change direction by either discarding or refactoring your work when some of your decisions prove inadequate, according to Scott Ambler. As members of a development team, we must admit that we do not know everything, even though some of us think we do. This is where humility comes into play. Everyone is a knowledge expert in their own specific domain. If you need help with your finances, you consult an accountant; if you have a problem or need help with a topic, why not consult a subject expert? An effective approach is to assume that everyone involved with your project has equal value and should therefore be treated with respect.

    Agile model characteristics:
    - Purposeful
    - Understandable
    - Sufficiently accurate
    - Sufficiently consistent
    - Sufficiently detailed
    - Provide positive value
    - As simple as possible
    - Just fulfill basic requirements

    According to Scott Ambler, agile models are as effective as possible because the time invested in a model is just enough effort to complete the job. If a model isn't good enough yet, additional effort can be invested to get more value out of it; but if a model is good enough for the current needs, or surpasses them, then any additional work done on it is waste. It is important to remember that "good enough" is in the eye of the beholder, so this can be tough. For agile models to work effectively, active stakeholders need to participate in the modeling process. Finally, it is very important to model with others; this allows for additional input, ensuring that all the stakeholders' needs are reflected in the models.

    How can agile models be incorporated into our projects? Agile models can be incorporated during the requirement-gathering and design phases. As requirements are gathered, the models should be updated to incorporate the new project details as they are defined and refined. Additionally, the agile models created during the requirements phase can be the basis for the models created during the design phase. It is important to add to a model only when the changes fit within the agile model characteristics and do not overcomplicate the design.

    Read the article

  • Iterative and Incremental Principle Series 4: Iteration Planning (a.k.a. What should I do today?)

    - by llowitz
    Welcome back to the fourth in a five-part series on applying the Iterative and Incremental principle. During the last segment, we discussed how the Implementation Plan includes the number of iterations for a project, but not the specifics of what will occur during each iteration. Today, we will explore Iteration Planning and discuss how and when to plan your iterations.

    As mentioned yesterday, OUM prescribes initially planning your project approach at a high level by creating an Implementation Plan. As the project moves through the lifecycle, the plan is progressively refined; specifically, the details of each iteration are planned prior to the iteration start. The Iteration Plan starts by identifying the iteration goal. An example of an iteration goal during the OUM Elaboration phase may be to complete the RD.140.2 Create Requirements Specification for a specific set of requirements. Another project may determine that their iteration goal is to focus on a smaller set of requirements, but to complete both the RD.140.2 Create Requirements Specification and the AN.100.1 Prepare Analysis Specification. In an OUM project, the Iteration Plan needs to identify both the iteration goal - how far along the implementation lifecycle you plan to be - and the scope of work for the iteration. Since each iteration typically ranges from 2 to 6 weeks, it is important to identify a scope of work that is achievable, yet challenging, given the iteration goal and timeframe. OUM provides specific guidelines and techniques to help prioritize the scope of work based on criteria such as risk, complexity, customer priority and dependency. In OUM, this prioritization helps focus early iterations on the high-risk, architecturally significant items, helping to mitigate overall project risk. Central to the prioritization is the MoSCoW (Must Have, Should Have, Could Have, and Won't Have) list. The result of the MoSCoW prioritization is an Iteration Group: a scope of work to be worked on as a group during one or more iterations.

    As I mentioned in yesterday's blog, it is pointless for me to plan my daily exercise in advance, since several factors, including the weather, influence what exercise I perform each day. Therefore, every morning I perform Iteration Planning. My "Iteration Plan" includes the type of exercise for the day (run, bike, elliptical), whether I will exercise outside or at the gym, and how many interval sets I plan to complete. I use several factors to prioritize the type of exercise that I perform each day. Since running outside is my highest priority, I try to complete it early in the week to minimize the risk of not meeting my overall goal of doing it twice each week. Regardless of the specific exercise I select, I follow the guidelines in my Implementation Plan by applying the 6-minute interval sets. Just as in OUM, the iteration goal should be set in the context of the overall Implementation Plan, and it should move the project closer to achieving the phase milestone goals. Having an Implementation Plan details the strategy of what I plan to do and keeps me on track, while the Iteration Plan affords me the flexibility to juggle what I do each day based on external influences, thus maximizing my overall success.

    Tomorrow I'll conclude the series on applying the Iterative and Incremental approach by discussing how to manage the iteration duration and highlighting some benefits of applying this principle.

    Read the article
