Search Results

Search found 5157 results on 207 pages for 'node'.

  • How to write reusable code in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method, returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so, after execution, it will have to "do" something as a side effect. It will then contain some code which does something other than just reading data. Let's suppose I want to call this method multiple times, each time with a different success callback. Since the callback is included in the method itself, I can't find a clean way to do this without either duplicating the method or specifying all possible callbacks in a long switch statement. What is the proper way to handle this problem? Am I approaching it the wrong way?
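
    One way to keep read() reusable is to take the success callback as a parameter instead of baking it in. A minimal sketch (db.query stands in for whatever driver API is in use, so treat the names as hypothetical):

        // Hypothetical data-access class: the caller supplies the continuation,
        // so the method contains nothing but data-access logic.
        function UserStore(db) {
            this.db = db;
        }

        // callback follows the Node convention: callback(err, rows)
        UserStore.prototype.read = function (id, callback) {
            this.db.query('SELECT * FROM users WHERE id = ?', [id], callback);
        };

        var store = new UserStore(db); // given some driver connection `db`
        // Each call site passes its own callback; no duplication, no switch:
        store.read(1, function (err, rows) { /* render a profile */ });
        store.read(2, function (err, rows) { /* build a report */ });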

  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top-level node, like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy, you also need to disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure. Working on ADF Code Corner sample #101, I wrote the following code lines that show a generic option for disclosing a tree node, starting from a handle to the node to disclose. The use case in ADF Code Corner sample #101 is a drag-and-drop operation from a table component to a tree to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country, and so on, to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes. The drop event provides a rowKey for the tree node that received the drop. As in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF binding layer, which represents the ADF tree binding at runtime, provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer.

        CollectionModel model = (CollectionModel) your_af_tree_reference.getValue();
        JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
        JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey);

    To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed.

        RowKeySetImpl rksImpl = new RowKeySetImpl();
        //the first key to add is the node that received the drop
        //operation (departments).
        rksImpl.add(dropRowKey);

    Similarly, the root node can be obtained from the tree binding. The root node is the end of all parent node iteration and therefore important.

        JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();

    The following code obtains a reference to the hierarchy of parent nodes until the root node is found.

        JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent();
        //walk up the tree to expand all parent nodes
        while(dropNodeParent != null && dropNodeParent != rootNode){
            //add the node's keyPath (remember, it's a List) to the row key set
            rksImpl.add(dropNodeParent.getKeyPath());
            dropNodeParent = dropNodeParent.getParent();
        }

    Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node. It's not quite "dinner for one", but the procedure is very similar to the one that handles the parent node keys.

        ArrayList<JUCtrlHierNodeBinding> childList =
            (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren();
        for(JUCtrlHierNodeBinding nb : childList){
            rksImpl.add(nb.getKeyPath());
        }

    Next, the row key set is defined as the disclosed row keys on the tree, so when you refresh (PPR) the tree, the new disclosed state shows.

        tree.setDisclosedRowKeys(rksImpl);
        AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());

    The refresh in my use case is on the tree parent component (a layout container), which usually shows the best effect for refreshing the tree component.

  • How to write loosely coupled classes in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method, returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so, after execution, it will have to "do" something as a side effect. It will then contain at least one line of extraneous code which has nothing to do with data access. Multiply this by all methods and all classes and you will very soon have an unmanageable "code soup". What is the proper way to handle this problem? Am I approaching it the wrong way?
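
    A hedged sketch of one alternative: return a promise, so the extra plumbing is written once per method rather than leaking application logic into the class (assumes a runtime or library that provides Promise; db.query is again a hypothetical driver call):

        function DataAccess(db) {
            this.db = db;
        }

        DataAccess.prototype.read = function (sql, params) {
            var db = this.db;
            return new Promise(function (resolve, reject) {
                db.query(sql, params, function (err, rows) {
                    if (err) { reject(err); }
                    else { resolve(rows); }
                });
            });
        };

        // Callers attach whatever continuation they need; the class stays pure data access:
        // dao.read('SELECT * FROM users', []).then(show, handleError);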

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background: I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.

    Problem: Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method, which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call:

        if (!this.game.isServer()) { this.onDeathInClient(); }

    This approach has worked pretty well so far, in terms of functionality. But as the codebase grew bigger, this style of coding began to seem a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite), which can run into problems when the server cannot instantiate the variable (it doesn't have the Image class that browsers have). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems.

    Possible workaround (with problems): I could use JavaScript mixins that are only added when in a browser. Thus, at the end of /shared/unit.js I would have this code:

        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);

    Then /client/localunit.js would store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists different on the server and the client. Now if I want to clear all subscribed events only from the shared simulator, I can't. Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do:

        if (!this.game.isServer()) { this.publishOnClient(event); }

    All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.
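
    For reference, a minimal sketch of the mixin workaround described above (mixin and LocalUnit are the question's own names, not a library API):

        // shared/unit.js - shared simulation logic only
        function Unit(game) {
            this.game = game;
            this.hp = 100;
        }
        Unit.prototype.addHp = function (amount) { this.hp += amount; };

        // Copy client-only methods onto the prototype when running in a browser.
        function mixin(Target, methods) {
            for (var name in methods) {
                Target.prototype[name] = methods[name];
            }
        }

        if (typeof exports !== 'undefined') {
            module.exports = Unit;   // Node.js: export the shared class as-is
        } else {
            mixin(Unit, LocalUnit);  // browser: LocalUnit comes from /client/localunit.js
        }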

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by one of my friends, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style not only to Node.js but to the JavaScript world. If you know of the new feature in C# 5.0 called "async and await", or you have learnt F#, you will find that "Wind" brings a similar async programming experience to JavaScript. By using "Wind", we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by "Wind" automatically and transparently.

    What's the Problem: Dense "Callback" Phobia

    Let's first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all I/O operations are designed around the async callback pattern, which means when the operation is done, it invokes a function that was taken from the last parameter. For example, the database connection opening code would be like this.

        sql.open(connectionString, function(error, conn) {
            if(error) {
                // some error handling code
            }
            else {
                // connection opened successfully
            }
        });

    And then if we need to query the database the code would be like this. It nests in the previous function.

        sql.open(connectionString, function(error, conn) {
            if(error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if(error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                    }
                });
            }
        });

    Assuming we need to copy some data from this database to another, we then need to open another connection and execute the command within the function under the query function.

        sql.open(connectionString, function(error, conn) {
            if(error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if(error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                        target.open(targetConnectionString, function(error, t_conn) {
                            if(error) {
                                // connect failed
                            }
                            else {
                                t_conn.queryRaw(copy_command, function(error, results) {
                                    if(error) {
                                        // copy failed
                                    }
                                    else {
                                        // and then, what do you want to do now...
                                    }
                                });
                            }
                        });
                    }
                });
            }
        });

    This is just an example. In a real project the logic would be more complicated. Our application might become messed up, with the business process fragmented across many callback functions. I would like to call this "Dense Callback Phobia". The challenge is how to make the code straightforward and easy to read, something like below.
        try
        {
            // open source connection
            var s_conn = sqlConnect(s_connectionString);
            // retrieve data
            var results = sqlExecuteCommand(s_conn, s_command);

            // open target connection
            var t_conn = sqlConnect(t_connectionString);
            // prepare the copy command
            var t_command = getCopyCommand(results);
            // execute the copy command
            sqlExecuteCommand(t_conn, t_command);
        }
        catch (ex)
        {
            // error handling
        }

    What's the Problem: Sync-styled Async Programming

    Similar to the previous problem, the callback-styled async programming model makes the upcoming operation a part of the current operation, mixed in with the error handling code. So it's very hard to understand what on earth the code will do. And since Node.js utilizes non-blocking I/O, we cannot simply invoke operations one by one: they will be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the "Finished" message, I see the message before the data has been copied. This is because all operations were executed at the same time. In order to make the copy operations and the print operation execute in sequence I introduced a module named "async", and the code changed as below.

        async.forEach(results.rows,
            function (row, callback) {
                var resource = {
                    "PartitionKey": row[1],
                    "RowKey": row[0],
                    "Value": row[2]
                };
                client.insertEntity(tableName, resource, function (error) {
                    if (error) {
                        callback(error);
                    }
                    else {
                        console.log("entity inserted.");
                        callback(null);
                    }
                });
            },
            function (error) {
                if (error) {
                    error["target"] = "insertEntity";
                    res.send(500, error);
                }
                else {
                    console.log("all done.");
                    res.send(200, "Done!");
                }
            });

    This ensured that the "Finished" message would be printed after all table entities had been inserted. But it cannot guarantee that the records will be inserted in sequence. Another challenge: can we make the code look sync-styled, like this?

        try
        {
            forEach(row in rows) {
                var entity = { /* ... */ };
                tableClient.insert(tableName, entity);
            }

            console.log("Finished");
        }
        catch (ex) {
            console.log(ex);
        }

    How "Wind" Helps

    "Wind" is a JavaScript library which provides control flow in plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It's available in NPM, so we can install it through "npm install wind". Now let's create a very simple Node.js application as the example. This application will take some website URLs from the command arguments, try to retrieve the body length of each and print it to the console, then at the end print "Finished". I'm going to use the "request" module to make the HTTP calls simple, so I also need to install it with the command "npm install request". The code would be like this.
        var request = require("request");

        // get the urls from the arguments; the first two arguments are `node.exe` and `fetch.js`
        var args = process.argv.splice(2);

        // main function
        var main = function() {
            for(var i = 0; i < args.length; i++) {
                // get the url
                var url = args[i];
                // send the http request and try to get the response and body
                request(url, function(error, response, body) {
                    if(!error && response.statusCode == 200) {
                        // log the url and the body length
                        console.log(
                            "%s: %d.",
                            response.request.uri.href,
                            body.length);
                    }
                    else {
                        // log error
                        console.log(error);
                    }
                });
            }

            // finished
            console.log("Finished");
        };

        // execute the main function
        main();

    Let's execute this application. (I split the command across multiple lines for better reading.)

        node fetch.js
            "http://www.igt.com/us-en.aspx"
            "http://www.igt.com/us-en/games.aspx"
            "http://www.igt.com/us-en/cabinets.aspx"
            "http://www.igt.com/us-en/systems.aspx"
            "http://www.igt.com/us-en/interactive.aspx"
            "http://www.igt.com/us-en/social-gaming.aspx"
            "http://www.igt.com/support.aspx"

    In the output, the finish message was printed at the beginning, and the pages' lengths were retrieved in a different order than we specified. This is because in this code the request commands and console logging commands are executed asynchronously and concurrently. Now let's introduce "Wind" to make them execute in order, which means it will request the websites one by one and print the message at the end.

    First of all we need to import the "Wind" package and make sure there's only one global variable named "Wind" - and ensure it's "Wind" instead of "wind".

        var Wind = require("wind");

    Next, we need to tell "Wind" which code will be executed asynchronously so that "Wind" can control the execution process. In this case the "request" operation executes asynchronously, so we will create a "Task" by using a built-in helper function in "Wind" named Wind.Async.Task.create.

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if(error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data =
                        {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

    The code above creates a "Task" from the original request-calling code. In "Wind" a "Task" is an operation that will finish at some time in the future. A "Task" can be started by invoking its start() method, but no one knows when it will actually finish. Wind.Async.Task.create helps us create a task. The only parameter is a function where we put the actual operation, and then notify the task object whether it finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task to "success" with the URL and body length. If it failed I set the task to "failure" and passed the error out.

    Next, we will change the main() function. In "Wind", if we want a function to be controlled by Wind we need to mark it as "async", using the code below.
        var main = eval(Wind.compile("async", function() {
        }));

    When the application is running, Wind will detect "eval(Wind.compile("async", function" and generate anonymous code from the body of the original function. The application will then run the anonymous code instead of the original one. In our example the main function will be like this.

        var main = eval(Wind.compile("async", function() {
            for(var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

    As you can see, when I try to request the URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and the main flow should be paused until it finishes (or fails). So in this case, my application will pause until the first response is received, then print its body length, then try the next one. At the end, it prints the finish message.

    Finally, execute the main function. The full code would be like this.

        var request = require("request");
        var Wind = require("wind");

        var args = process.argv.splice(2);

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if(error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data =
                        {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

        var main = eval(Wind.compile("async", function() {
            for(var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

        main().start();

    Run our new application. At the beginning we will see the code compiled and generated by Wind. Then the pages are requested one by one, and at the end the finish message is printed. Below is the code Wind generated for us, with the original code shown alongside in comments.
        // Original:
        function () {
            for(var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }

        // Compiled:
        /* async << function () { */ (function () {
            var _builder_$0 = Wind.builders["async"];
            return _builder_$0.Start(this,
                _builder_$0.Combine(
                    _builder_$0.Delay(function () {
                        /* var i = 0; */ var i = 0;
                        /* for ( */ return _builder_$0.For(function () {
                            /* ; i < args.length */ return i < args.length;
                        }, function () {
                            /* ; i ++) { */ i ++;
                        },
                        /* try { */ _builder_$0.Try(
                            _builder_$0.Delay(function () {
                                /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) {
                                    /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length);
                                    return _builder_$0.Normal();
                                });
                            }),
                            /* } catch (ex) { */ function (ex) {
                                /* console.log(ex); */ console.log(ex);
                                return _builder_$0.Normal();
                            /* } */ },
                            null
                        )
                        /* } */ );
                    }),
                    _builder_$0.Delay(function () {
                        /* console.log("Finished"); */ console.log("Finished");
                        return _builder_$0.Normal();
                    })
                )
            );
        /* } */ })

    How Wind Works

    Some may raise a big concern when they find I utilized "eval" in my code, assuming that Wind uses "eval" to execute code dynamically, since "eval" performs poorly. But Wind does NOT use "eval" to run the code. It only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind checks for "eval(Wind.compile("async", function" so that it knows the function should be compiled. It then utilizes parse-js to analyze the inner JavaScript, generates the anonymous code in memory, and rewrites the original code so that, while the application runs, the anonymous code is used instead of the original. Since the code generation is done once when the application starts, no matter how long the application runs and how many times the async function is invoked, the generated code is reused; there is no need to generate it again, and no significant performance hit when using Wind.

    Wind in My Previous Demo

    Let's adopt Wind into one of my previous demonstrations to see how it helps us make our code simple, straightforward and easy to read and understand. In this post, when I implemented the functionality that copied records from my WASD database to table storage, the logic was:
    1. Open the database connection.
    2. Execute a query to select all records from the table.
    3. Recreate the table in Windows Azure Table Storage.
    4. Create an entity from each of the records retrieved previously, and insert them into table storage.
    5. Finally, show a message as the HTTP response.
    With so many callbacks and async operations, it's very hard to follow the logic from the code. Now let's use Wind to rewrite it. First of all, of course, we need the Wind package. Then we need to include the package files in the project and mark them as "Copy always". Add the Wind package to the source code; pay attention to the variable name - you must use "Wind" instead of "wind".
        var express = require("express");
        var async = require("async");
        var sql = require("node-sqlserver");
        var azure = require("azure");
        var Wind = require("wind");

    Now we need to create some async functions with Wind. All async operations - open the database, retrieve records, recreate the table (delete and create) and insert an entity into the table - should be wrapped so that they can be controlled by Wind. Below are these new functions. All of them are created by using Wind.Async.Task.create.

        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };

        sql.queryAsync = function (conn, query) {
            return Wind.Async.Task.create(function (t) {
                conn.queryRaw(query, function (error, results) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", results);
                    }
                });
            });
        };

        azure.recreateTableAsync = function (tableName) {
            return Wind.Async.Task.create(function (t) {
                client.deleteTable(tableName, function (error, successful, response) {
                    console.log("delete table finished");
                    client.createTableIfNotExists(tableName, function (error, successful, response) {
                        console.log("create table finished");
                        if (error) {
                            t.complete("failure", error);
                        }
                        else {
                            t.complete("success", null);
                        }
                    });
                });
            });
        };

        azure.insertEntityAsync = function (tableName, entity) {
            return Wind.Async.Task.create(function (t) {
                client.insertEntity(tableName, entity, function (error, entity, response) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", null);
                    }
                });
            });
        };

    Then, in order to use these functions, we create a new function which contains all the steps for copying the data.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Let's execute the steps one by one with the "$await" keyword introduced by Wind, so that they are invoked in sequence. First, open the database connection.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Then retrieve all records over the database connection.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    After recreating the table, we create the entities and insert them into table storage.
        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Finally, send the response back to the browser.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Compared with the previous code, this is more readable and much easier to understand; it's easy to tell what the function does even without any comments. When the user goes to the URL "/was/copyRecords" we execute the function above. The code would be like this.

        app.get("/was/copyRecords", function (req, res) {
            copyRecords(req, res).start();
        });

    In the logs printed to the local compute emulator console, we can see the functions executed one by one, and finally the response came back to my browser.

    Scaffold Functions in Wind

    Wind provides not only the async flow control and compile functions, but many scaffold methods as well, which let us build our async code more easily. I'm going to introduce some basic scaffold functions here. In the code above I created some functions wrapping the original async functions, such as open database, create table, etc. All of them are very similar: create a task by using Wind.Async.Task.create, and return the error or result object through the task's complete() function. In fact, Wind provides functions that create the task object from the original async function directly. If the original async function only has a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object. For example, the code below returns a task object wrapping the file-existence check function.
        var Wind = require("wind");
        var fs = require("fs");

        fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

    In Node.js a very popular async function pattern is that the first parameter of the callback function represents the error object, and the other parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open-database function can be created with the code below.

        sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);

        /*
        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };
        */

    When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK's insert entity function, cannot be processed correctly, so I personally suggest writing the wrapping method manually.

    Another scaffold feature in Wind is parallel task coordination. In this example, the steps of opening the database, retrieving the records and recreating the table should be invoked one by one, but the copying of data from database to table storage can be executed in parallel. Wind has a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task which completes only when all tasks have completed, or when any error occurs. For example, in the code below I used Task.whenAll to make all copy operations execute at the same time.

        var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage in parallel
                    var tasks = new Array(results.rows.length);
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        tasks[i] = azure.insertEntityAsync(tableName, entity);
                    }
                    $await(Wind.Async.Task.whenAll(tasks));
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

        app.get("/was/copyRecordsInParallel", function (req, res) {
            copyRecordsInParallel(req, res).start();
        });

    Besides task creation and coordination, Wind supports cancellation, so we can send a cancellation signal to tasks. It also includes exception support, which means any exceptions will be reported to the caller function.

    Summary

    In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, unlike other async libraries and frameworks, Wind adopts ideas from F# and C# and utilizes runtime code generation to make it easier to write async, callback-based functions in a sync-styled way.
    By using Wind there will be almost no callbacks, and the code will be very easy to understand. Wind is still being developed and improved. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear your problems, feedback, suggestions and comments. You can contact Jeff by:
    - Email: [email protected]
    - Group: https://groups.google.com/d/forum/windjs
    - GitHub: https://github.com/JeffreyZhao/wind/issues

    Source code can be downloaded here.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Node.js MMO - process and/or map division

    - by Gipsy King
    I am in the design phase of an MMO browser-based game (certainly not massive, but all connected players are in the same universe), and I am struggling to find a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice since I am also concerned with different processes.

    Solution 1: Tie a process to a map location (like a map cell), and connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically).
    Pros:
    - Easier to implement
    Cons:
    - Must divide the map into zones
    - Player reconnection when moving into a different zone is probably annoying
    - If one zone/process is always busy (has players in it), it doesn't really load-balance, unless I split the zone, which may not always be viable
    - There shouldn't be any visible borders

    Solution 1b: Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact.

    Solution 2: Spawn processes on demand, unrelated to location. Have one special process keep track of all connected player handles, their locations, and the processes they're connected to. Then when a player performs an action, the process finds all other nearby players (from the special player-process-location tracking node), and instructs their matching processes to relay the action.
    Pros:
    - Easy load balancing: spawn more processes
    - Avoids player reconnecting / borders between zones
    Cons:
    - Harder to implement and test
    - Additional steps of finding players, and relaying the event/action to another process
    - If the player-location-process tracking process fails, all others fail too

    I would like to hear if I'm missing something, or am completely off track. A sketch of Solution 1's relay step follows below.
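
    A rough sketch of Solution 1's relay step, using classic Socket.IO 0.9-style rooms (one room per map cell; the exact API varies by Socket.IO version, so treat this as an assumption-laden outline):

        // `io` comes from require('socket.io').listen(server) in this zone process.
        io.sockets.on('connection', function (socket) {
            socket.on('enterCell', function (cellId) {
                socket.join(cellId);
                socket.cellId = cellId;
            });

            // Relay a player's command to everyone simulating the same cell.
            socket.on('command', function (cmd) {
                io.sockets.in(socket.cellId).emit('command', cmd);
            });

            // Crossing a border means leaving one room and joining another,
            // or reconnecting to the neighbouring zone process.
            socket.on('leaveCell', function () {
                socket.leave(socket.cellId);
            });
        });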

  • Learning node.js

    - by john smith
    I am not sure if this is the right place to ask but, I thought this was the most suitable. I recently graduated from university, where I learned the full PHP stack - basically all the LAMP stuff, not counting all the other subjects. I had not even got my degree when this whole node.js thing boomed out of nowhere. You can imagine how one can feel about this; the story is always the same: you never stop learning and studying. So I recently got my hands on node.js: reading books, tutorials, and everything imaginable on the internet. The problem is one and simple: this is nowhere near having a teacher standing near you, helping you understand and solve your problems, especially when all you can do is post your doubts on a website and patiently wait for replies. It's not that it isn't good, it's just much slower than what I just expressed above. So, in short words: is there a place where one can find someone willing to teach you about such contents? This would obviously be done via remote means, like Skype and such. Can anyone here point me in the right direction? Or just downvote me for being on the wrong website? Thanks in advance.

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to do something about the serialisation ourselves. There appear to be a variety of LZW implementations in Javascript, some of which confuse me as to their suitability for in-browser use only versus transmission across the wire, due to my lack of understanding of low-level encodings. More importantly, all of them seem to impose a noticeable performance drag when Javascript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation: MessagePack does not appear to have any active support in Javascript itself; BSON does not have a Javascript implementation; and an alternative BISON project that I tested does not deserialise everything back to the original values (large numbers), and it does not look like any further development will happen either. What are some other options others have explored for Node.js?
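
    One option worth measuring is Node's built-in zlib module: compress the JSON payload on the server before it goes over the wire. A minimal sketch of the Node end only (the browser would still need a JS inflate implementation, which is exactly the performance concern above; connection.write stands in for your Sock.js send call):

        var zlib = require('zlib');

        // Compress an outgoing message; the receiver must inflate it again.
        function sendCompressed(connection, message) {
            var json = JSON.stringify(message);
            zlib.deflate(new Buffer(json), function (err, buffer) {
                if (err) return console.error(err);
                // Frames are strings here, so base64-encode the compressed bytes.
                connection.write(buffer.toString('base64'));
            });
        }

        // Inflating on the Node side (e.g. for messages relayed between servers):
        function receiveCompressed(data, callback) {
            zlib.inflate(new Buffer(data, 'base64'), function (err, buffer) {
                callback(err, err ? null : JSON.parse(buffer.toString()));
            });
        }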

  • Declarative Transactions in Node.js

    - by James Kingsbery
    Back in the day, it was common to manage database transactions in Java by writing code that did it. Something like this:

        Transaction tx = session.startTransaction();
        ...
        try {
            tx.commit();
        } catch (SomeException e){
            tx.rollback();
        }

    at the beginning and end of every method. This had some obvious problems - it's redundant, hides the intent of what's happening, etc. So, along came annotation-driven transactions:

        @Transaction
        public SomeResultObj getResult(...){
            ...
        }

    Is there any support for declarative transaction management in node.js?
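
    There is no annotation mechanism in plain JavaScript, but a higher-order function gets close to the declarative feel. A sketch, assuming a hypothetical session object with the same startTransaction/commit/rollback methods as the Java example:

        // withTransaction owns begin/commit/rollback; the business logic never mentions tx.
        function withTransaction(session, work, callback) {
            var tx = session.startTransaction();
            work(function (err, result) {
                if (err) {
                    tx.rollback();
                    return callback(err);
                }
                tx.commit();
                callback(null, result);
            });
        }

        // Usage: the wrapped function only does the work and signals done(err, result).
        withTransaction(session, function (done) {
            // ... queries using the session ...
            done(null, someResult); // someResult is a placeholder
        }, function (err, result) {
            // handle the outcome
        });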

  • Box2Dweb very slow on node.js

    - by Peteris
    I'm using Box2Dweb on node.js. I have a rotated box object that I apply an impulse to in order to move it around. The timestep is set at 50ms; however, it bumps up to 100ms and even 200ms as soon as I add any more edges or boxes. Here are the edges I would like to use as bounds around the playing area:

        // Computing the corners
        var upLeft = new b2Vec2(0, 0),
            lowLeft = new b2Vec2(0, height),
            lowRight = new b2Vec2(width, height),
            upRight = new b2Vec2(width, 0)

        // Edges bounding the visible game area
        var edgeFixDef = new b2FixtureDef
        edgeFixDef.friction = 0.5
        edgeFixDef.restitution = 0.2
        edgeFixDef.shape = new b2PolygonShape

        var edgeBodyDef = new b2BodyDef;
        edgeBodyDef.type = b2Body.b2_staticBody

        edgeFixDef.shape.SetAsEdge(upLeft, lowLeft)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(lowLeft, lowRight)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(lowRight, upRight)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(upRight, upLeft)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)

    Can box2d really become this slow for even two bodies, or is there some pitfall? It would be very surprising given all the demos which successfully use tens of objects. One thing worth ruling out is how the world is being stepped; see the sketch below.
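
    A sketch of what that stepping loop might look like (world is the b2World from the question; the iteration counts are typical values, not tuned ones):

        // Fixed-timestep loop: step the physics at a constant rate.
        var STEP = 1 / 60;           // seconds per physics step
        var VELOCITY_ITERATIONS = 8;
        var POSITION_ITERATIONS = 3;

        setInterval(function () {
            world.Step(STEP, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
            world.ClearForces();     // clear accumulated forces after each step
        }, 1000 / 60);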

  • Would I benefit changing from PHP to Node.js (in context)

    - by danneth
    The situation: we are about to roll out what is essentially a logging service. As we are rather PHP-heavy, the current implementation uses it. We will have about 200 computers (most on the same network) that will each send, via HTTP POST, around 5000 requests/day, with each request containing about 300 bytes of data. The receiving end is hosted at Amazon and is a very simple PHP form with some simple validation that puts everything in a database. Now, I've recently been introduced to Node.js and I'm curious as to whether it would be a good fit for the backend here. Granted, I could easily build something to test this. But since I haven't fully grasped the async methodology, I would really like someone with experience to explain it to me.
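
    For scale: 200 computers sending 5000 requests/day each is about one million requests/day, roughly 12 requests/second on average, which either stack should handle comfortably. A minimal sketch of the Node receiving end with the core http module (validate and saveToDatabase are placeholders for your own logic):

        var http = require('http');

        http.createServer(function (req, res) {
            if (req.method !== 'POST') {
                res.writeHead(405);
                return res.end();
            }
            var body = '';
            req.on('data', function (chunk) { body += chunk; });
            req.on('end', function () {
                if (!validate(body)) {
                    res.writeHead(400);
                    return res.end('bad request');
                }
                saveToDatabase(body, function (err) {
                    res.writeHead(err ? 500 : 200);
                    res.end();
                });
            });
        }).listen(8080);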

  • Node Serialization in NetBeans Platform 7.0

    - by Geertjan
    Node serialization makes sense when you're not interested in the data (since that should be serialized to a database), but in the state of the application. For example, when the application restarts, you want the last selected node to automatically be selected again. That's not the kind of information you'll want to store in a database; hence node serialization is not about data serialization but about application state serialization. I've written about this topic in October 2008, here and here, but want to show how to do this again, using NetBeans Platform 7.0. Somewhere I remember reading that this can't be done anymore, and that's typically the best motivation for me, i.e., to prove that it can be done after all. Anyway, in a standard POJO/Node/BeanTreeView scenario, do the following:

    Remove the "@ConvertAsProperties" annotation at the top of the class, which you'll find there if you used the Window Component wizard. We're not going to use property-file based serialization, but plain old java.io.Serializable instead. In the TopComponent, assuming it is named "UserExplorerTopComponent", typically at the end of the file, add the following:

        @Override
        public Object writeReplace() {
            //We want to work with one selected item only
            //and thanks to BeanTreeView.setSelectionMode,
            //only one node can be selected anyway:
            Handle handle = NodeOp.toHandles(em.getSelectedNodes())[0];
            return new ResolvableHelper(handle);
        }

        public final static class ResolvableHelper implements Serializable {
            private static final long serialVersionUID = 1L;
            public Handle selectedHandle;
            private ResolvableHelper(Handle selectedHandle) {
                this.selectedHandle = selectedHandle;
            }
            public Object readResolve() {
                WindowManager.getDefault().invokeWhenUIReady(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            //Get the TopComponent:
                            UserExplorerTopComponent tc = (UserExplorerTopComponent)
                                WindowManager.getDefault().findTopComponent("UserExplorerTopComponent");
                            //Get the display text to search for:
                            String selectedDisplayName = selectedHandle.getNode().getDisplayName();
                            //Get the root, which is the parent of the node we want:
                            Node root = tc.getExplorerManager().getRootContext();
                            //Find the node, by passing in the root with the display text:
                            Node selectedNode = NodeOp.findPath(root, new String[]{selectedDisplayName});
                            //Set the explorer manager's selected node:
                            tc.getExplorerManager().setSelectedNodes(new Node[]{selectedNode});
                        } catch (PropertyVetoException ex) {
                            Exceptions.printStackTrace(ex);
                        } catch (IOException ex) {
                            Exceptions.printStackTrace(ex);
                        }
                    }
                });
                return null;
            }
        }

    Assuming you have a node named "UserNode" for a type named "User" containing a property named "type", add the bits shown below to your "UserNode":

        public class UserNode extends AbstractNode implements Serializable {

            static final long serialVersionUID = 1L;

            public UserNode(User key) {
                super(Children.LEAF);
                setName(key.getType());
            }

            @Override
            public Handle getHandle() {
                return new CustomHandle(this, getName());
            }

            public class CustomHandle implements Node.Handle {
                static final long serialVersionUID = 1L;
                private AbstractNode node = null;
                private final String searchString;
                public CustomHandle(AbstractNode node, String searchString) {
                    this.node = node;
                    this.searchString = searchString;
                }
                @Override
                public Node getNode() {
                    node.setName(searchString);
                    return node;
                }
            }
        }

    Run the application and select one of the user nodes. Close the application. Start it up again. The user node is not automatically selected; in fact, the window does not open, and you will see this in the output:

        Caused: java.io.InvalidClassException: org.serialization.sample.UserNode; no valid constructor

    Read this article and then you'll understand the need for this class:

        public class BaseNode extends AbstractNode {
            public BaseNode() {
                super(Children.LEAF);
            }
            public BaseNode(Children kids) {
                super(kids);
            }
            public BaseNode(Children kids, Lookup lkp) {
                super(kids, lkp);
            }
        }

    Now, instead of extending AbstractNode in your UserNode, extend BaseNode. Then the first non-serializable superclass of the UserNode has an explicitly declared no-args constructor. Do the same as the above for each node in the hierarchy that needs to be serialized. If you have multiple nodes needing serialization, you can share the "CustomHandle" inner class above between all the other nodes, while all the other nodes will also need to extend BaseNode (or provide their own non-serializable superclass that explicitly declares a no-args constructor). Now, when I run the application, select a node, close the application and restart it, the previously selected node is automatically selected when the application has restarted.

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite to test the application. I decided to use jasmine, as it seems pretty solid and has a lot of features for stubbing, spying and mocking methods and classes. The application has a lot of external data stores and API access (kestrel, mysql, mongodb, facebook, and more). My issue is, I've got a good amount of code written that I want to start testing - as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the mysql execution and returning a static dataset. I thus end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying a method is being called. I know this is kind of abstract and vague; it seems that the idea of testing is very much abstract though, so hopefully someone has some experience and can guide me in the right direction. Any advice or reading I can do is more than welcome. Thanks in advance.
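
    One common pattern is to inject the connection, so unit tests exercise the mapping logic against a stubbed driver while separate integration tests hit a real test database. A rough Jasmine-style sketch with hypothetical names:

        // repository.js - the connection is injected, not created internally
        function UserRepository(connection) {
            this.connection = connection;
        }
        UserRepository.prototype.fetchAll = function (callback) {
            this.connection.query('SELECT id, name FROM users', function (err, rows) {
                if (err) return callback(err);
                // this mapping is the part actually worth unit-testing
                callback(null, rows.map(function (r) {
                    return { id: r.id, name: r.name.toUpperCase() };
                }));
            });
        };

        // spec: stub the driver and assert on the mapping, not on MySQL itself
        describe('UserRepository', function () {
            it('maps rows to objects', function () {
                var stub = { query: function (sql, cb) { cb(null, [{ id: 1, name: 'ann' }]); } };
                new UserRepository(stub).fetchAll(function (err, users) {
                    expect(users[0].name).toBe('ANN');
                });
            });
        });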

  • Dealing with the node.js callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation:

        doStuff(arg1, arg2, function(err, result) {
            doMoreStuff(arg3, arg4, function(err, result) {
                doEvenMoreStuff(arg5, arg6, function(err, result) {
                    omgHowDidIGetHere();
                });
            });
        });

    The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and on making a single object declared at the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it OK to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure?

        function topLevelFunction(globalishObject, callback) {
            function doMoreStuffImpl(err, result) {
                doMoreStuff(arg5, arg6, function(err, result) {
                    callback(null, globalishObject);
                });
            }
            doStuff(arg1, arg2, doMoreStuffImpl);
        }

    and so on for several more layers... Or are there frameworks etc. to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?
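
    Besides named functions and closures, control-flow libraries flatten the pyramid. A sketch with the async module's waterfall, reusing the placeholder functions from above (each step passes its result to the next, with a single error handler at the end):

        var async = require('async');

        async.waterfall([
            function (next) {
                doStuff(arg1, arg2, next);        // next(err, result)
            },
            function (result, next) {
                doMoreStuff(arg3, arg4, next);
            },
            function (result, next) {
                doEvenMoreStuff(arg5, arg6, next);
            }
        ], function (err, result) {
            if (err) return handleError(err);     // handleError is a placeholder
            // all three steps completed; indentation stays flat
        });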

  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments, I have used Nginx to serve static assets such as JS/CSS and to reverse proxy to Node (as I've heard Nginx does a much better job of this / Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and discovered many articles detailing this limitation. Most of them suggest using Varnish to direct the WebSockets traffic directly to node, bypassing Nginx. This is my current setup:
    - Varnish : port 80 - routing HTTP requests to Nginx and WebSockets directly to node
    - Nginx : port 8080 - serving static assets like CSS/JS
    - Node.js Express : port 3000 - serving the app, over HTTP + WebSockets
    However, there is now the added complexity that Varnish doesn't support HTTPS, which requires Stunnel or some other solution, and it's also not load balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse proxy Node.js using Nginx when Varnish is also present? Even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?

  • Node.js Build failed: -> task failed (error#2)?

    - by Richard Hedges
    I'm trying to install Node.js on my CentOS server. I run ./configure and it runs perfectly fine. I then run the 'make' command and it produces the following:

        [5/38] libv8.a: deps/v8/SConstruct -> out/Release/libv8.a
        /usr/local/bin/python "/root/node/tools/scons/scons.py" -j 1 -C "/root/node/out/Release/" -Y "/root/node/deps/v8" visibility=default mode=release arch=ia32 toolchain=gcc library=static snapshot=on
        scons: Reading SConscript files ...
        ImportError: No module named bz2:
          File "/root/node/deps/v8/SConstruct", line 37:
            import js2c, utils
          File "/root/node/deps/v8/tools/js2c.py", line 36:
            import bz2
        Waf: Leaving directory `/root/node/out'
        Build failed:
          -> task failed (err #2): {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    I've done some searching on Google but I can't seem to find anything to help. Most of what I've found is for Cygwin anyway, and I'm on CentOS 4.9. Like I said, the ./configure went through perfectly fine with no errors, so there's nothing there that I can see.

    EDIT: I've got a little further. Now I just need to upgrade G++ to version 4 (or higher). I tried yum update gcc but no luck, so I tried yum install gcc44, which resulted in no luck either. Has anyone got any ideas as to how I can update G++?

  • How to display image in html image tag - node.js [on hold]

    - by ykel
    I use the following code to store an image in the file system and to retrieve it. I would like to display the retrieved image in an HTML image tag; however, the image is rendered on the response page but not in the HTML image tag. Here is my HTML image tag:

        <img src="/show">

    This code retrieves the image:

        app.get('/show', function (req, res) {
            var FilePath = __dirname + "/uploads/3562_564927103528411_1723183324_n.jpg";
            fs.readFile(FilePath, function (err, data) {
                if (err) throw err;
                console.log(data);
                res.writeHead(200, {'Content-Type': 'image/jpeg'});
                res.end(data); // Send the file data to the browser.
            })
        });
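
    A lighter variant, as a hedged sketch, is to stream the file to the response instead of buffering it in memory; this also avoids the uncaught throw inside the callback (app and fs are as in the question):

        app.get('/show', function (req, res) {
            var filePath = __dirname + '/uploads/3562_564927103528411_1723183324_n.jpg';
            res.writeHead(200, {'Content-Type': 'image/jpeg'});
            // Pipe the file straight to the response instead of reading it all first.
            fs.createReadStream(filePath)
                .on('error', function () {
                    res.end(); // file missing or unreadable
                })
                .pipe(res);
        });

    Either way, it is worth checking in the browser's network tab that the /show request made by the <img> tag really returns Content-Type image/jpeg.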

  • A* PathFinding Poor Performance

    - by RedShft
    After debugging for a few hours, the algorithm seems to be working. Right now, to check whether it works, I compare the end node's position with the currentNode's position when the while loop exits. So far the values look correct. The problem is, the farther I get from the NPC, who is currently stationary, the worse the performance gets. It gets to a point where the game is unplayable, less than 10 fps. My current PathGraph is 2500 nodes, which I believe is pretty small, right? Any ideas on how to improve performance?

    struct Node
    {
        bool walkable;          // Whether this node is blocked or open
        vect2 position;         // The tile's position on the map in pixels
        int xIndex, yIndex;     // The index values of the tile in the array
        Node*[4] connections;   // An array of pointers to nodes this current node connects to
        Node* parent;
        int gScore;
        int hScore;
        int fScore;
    }

    class AStar
    {
    private:
        SList!Node openList;
        SList!Node closedList;
        //Node*[4] connections; // The connections of the current node
        Node currentNode;       // The current node being processed
        Node[] Path;            // The path found
        const int connectionCost = 10;
        Node start, end;

        //////////////////////////////////////////////////////////

        void AddToList( ref SList!Node list, ref Node node )
        {
            list.insert( node );
        }

        void RemoveFrom( ref SList!Node list, ref Node node )
        {
            // Linear search through the list to find and remove the node
            foreach( elem; list )
            {
                if( node.xIndex == elem.xIndex && node.yIndex == elem.yIndex )
                {
                    auto a = find( list[], elem );
                    list.linearRemove( take( a, 1 ) );
                }
            }
        }

        bool IsInList( SList!Node list, ref Node node )
        {
            // Another linear search, performed once per neighbour per expansion
            foreach( elem; list )
            {
                if( node.xIndex == elem.xIndex && node.yIndex == elem.yIndex )
                    return true;
            }
            return false;
        }

        void ClearList( SList!Node list )
        {
            list.clear;
        }

        void SetParentNode( ref Node parent, ref Node child )
        {
            child.parent = &parent;
        }

        void SetStartAndEndNode( vect2 vStart, vect2 vEnd, Node[] PathGraph )
        {
            int startXIndex, startYIndex;
            int endXIndex, endYIndex;

            startXIndex = cast(int)( vStart.x / 32 );
            startYIndex = cast(int)( vStart.y / 32 );
            endXIndex   = cast(int)( vEnd.x / 32 );
            endYIndex   = cast(int)( vEnd.y / 32 );

            foreach( node; PathGraph )
            {
                if( node.xIndex == startXIndex && node.yIndex == startYIndex )
                    start = node;
                if( node.xIndex == endXIndex && node.yIndex == endYIndex )
                    end = node;
            }
        }

        void SetStartScores( ref Node start )
        {
            start.gScore = 0;
            start.hScore = CalculateHScore( start, end );
            start.fScore = CalculateFScore( start );
        }

        Node GetLowestFScore()
        {
            // Linear scan over the whole open list on every iteration —
            // a likely performance bottleneck (see the heap sketch below)
            Node lowest;
            lowest.fScore = 10000;
            foreach( elem; openList )
            {
                if( elem.fScore < lowest.fScore )
                    lowest = elem;
            }
            return lowest;
        }

        // This function currently sets the program into an infinite loop.
        // I still need to debug to figure out why the parent nodes aren't correct.
        void GeneratePath()
        {
            while( currentNode.position != start.position )
            {
                Path ~= currentNode;
                currentNode = *currentNode.parent;
            }
        }

        void ReversePath()
        {
            Node[] temp;
            for( int i = Path.length - 1; i >= 0; i-- )
            {
                temp ~= Path[i];
            }
            Path = temp.dup;
        }

    public:
        // @FIXME It seems to find the path, but now performance is terrible
        void FindPath( vect2 vStart, vect2 vEnd, Node[] PathGraph )
        {
            openList.clear;
            closedList.clear;
            SetStartAndEndNode( vStart, vEnd, PathGraph );
            SetStartScores( start );
            AddToList( openList, start );

            while( currentNode.position != end.position )
            {
                currentNode = GetLowestFScore();
                if( currentNode.position == end.position )
                    break;
                else
                {
                    RemoveFrom( openList, currentNode );
                    AddToList( closedList, currentNode );

                    for( int i = 0; i < currentNode.connections.length; i++ )
                    {
                        if( currentNode.connections[i] is null )
                            continue;
                        else
                        {
                            if( IsInList( closedList, *currentNode.connections[i] )
                                && currentNode.gScore < currentNode.connections[i].gScore )
                            {
                                currentNode.connections[i].gScore = currentNode.gScore + connectionCost;
                                currentNode.connections[i].hScore =
                                    abs( currentNode.connections[i].xIndex - end.xIndex )
                                    + abs( currentNode.connections[i].yIndex - end.yIndex );
                                currentNode.connections[i].fScore =
                                    currentNode.connections[i].gScore + currentNode.connections[i].hScore;
                                currentNode.connections[i].parent = &currentNode;
                            }
                            else if( IsInList( openList, *currentNode.connections[i] )
                                     && currentNode.gScore < currentNode.connections[i].gScore )
                            {
                                currentNode.connections[i].gScore = currentNode.gScore + connectionCost;
                                currentNode.connections[i].hScore =
                                    abs( currentNode.connections[i].xIndex - end.xIndex )
                                    + abs( currentNode.connections[i].yIndex - end.yIndex );
                                currentNode.connections[i].fScore =
                                    currentNode.connections[i].gScore + currentNode.connections[i].hScore;
                                currentNode.connections[i].parent = &currentNode;
                            }
                            else
                            {
                                currentNode.connections[i].gScore = currentNode.gScore + connectionCost;
                                currentNode.connections[i].hScore =
                                    abs( currentNode.connections[i].xIndex - end.xIndex )
                                    + abs( currentNode.connections[i].yIndex - end.yIndex );
                                currentNode.connections[i].fScore =
                                    currentNode.connections[i].gScore + currentNode.connections[i].hScore;
                                currentNode.connections[i].parent = &currentNode;
                                AddToList( openList, *currentNode.connections[i] );
                            }
                        }
                    }
                }
            }

            writeln( "Current Node Position: ", currentNode.position );
            writeln( "End Node Position: ", end.position );

            if( currentNode.position == end.position )
            {
                writeln( "Current Node Parent: ", currentNode.parent );
                //GeneratePath();
                //ReversePath();
            }
        }

        Node[] GetPath()
        {
            return Path;
        }
    }

    This is my first attempt at A* so any help would be greatly appreciated.
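
    [Editor's note: the usual fix for this symptom is to replace the linear scans. GetLowestFScore, IsInList and RemoveFrom all walk the whole list, so each expansion costs O(n) and the cost grows with the distance searched. A binary-heap open list makes extraction O(log n), and a hash set keyed on (xIndex, yIndex) makes closed-list membership checks O(1). Below is a minimal illustrative sketch of such a heap — written in JavaScript (the language most results on this page use) rather than D, and keyed on an fScore field mirroring the question's Node struct; it is not the asker's code.]

    function MinHeap() {
        this.items = [];
    }

    MinHeap.prototype.push = function (node) {
        var items = this.items;
        items.push(node);
        var i = items.length - 1;
        // Bubble the new node up while its parent has a higher fScore.
        while (i > 0) {
            var parent = (i - 1) >> 1;
            if (items[parent].fScore <= items[i].fScore) break;
            var tmp = items[parent]; items[parent] = items[i]; items[i] = tmp;
            i = parent;
        }
    };

    MinHeap.prototype.pop = function () {
        var items = this.items;
        if (items.length === 0) return undefined;
        var top = items[0];
        var last = items.pop();
        if (items.length > 0) {
            items[0] = last;
            // Sift the moved node down past any child with a lower fScore.
            var i = 0;
            for (;;) {
                var l = 2 * i + 1, r = 2 * i + 2, smallest = i;
                if (l < items.length && items[l].fScore < items[smallest].fScore) smallest = l;
                if (r < items.length && items[r].fScore < items[smallest].fScore) smallest = r;
                if (smallest === i) break;
                var tmp = items[smallest]; items[smallest] = items[i]; items[i] = tmp;
                i = smallest;
            }
        }
        return top;
    };

    // Usage: push open-list nodes, pop the lowest-fScore node in O(log n).
    var open = new MinHeap();
    open.push({ fScore: 12 });
    open.push({ fScore: 7 });
    console.log(open.pop().fScore); // prints 7

    [The same idea carries over to D with any array-backed heap over Node pointers; storing pointers rather than copies would also avoid the parent-pointer confusion hinted at in GeneratePath.]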

    Read the article

  • 250k connections for comet with node.js

    - by Nenad
    How do we implement node.js so it can handle 250k connections as a comet server (on the client side we use socket.io)? Would using nginx as a proxy/load balancer be the right solution? Or would HAProxy be the better way? Does anyone have real-world experience with 100k+ connections and can share their setup? Would a setup like this be the right one (quad-core CPU per server — start 4 instances of node.js per server?):

                 nginx (as proxy / load balancing server)
                /                   |                   \
               /                    |                    \
       node server #1        node server #2        node server #3
        4 instances           4 instances           4 instances
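
    [Editor's note: for the "4 instances per quad-core server" part, the common approach is node's built-in cluster module, which forks one worker per core and lets them share a listening socket. A minimal sketch follows — the HTTP handler is a stand-in for the real socket.io/comet app, and the 'exit' event shown is the node 0.8+ API (node 0.6 used a 'death' event instead).]

    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
        var workers = os.cpus().length; // e.g. 4 on a quad-core server
        for (var i = 0; i < workers; i++) {
            cluster.fork();
        }
        cluster.on('exit', function (worker) {
            console.log('worker ' + worker.process.pid + ' died, restarting');
            cluster.fork(); // keep the pool at full strength
        });
    } else {
        // Stand-in for the real socket.io/comet app; workers share the port.
        http.createServer(function (req, res) {
            res.end('handled by pid ' + process.pid);
        }).listen(8080);
    }

    [At these connection counts the per-process file-descriptor limit also needs raising (ulimit -n) well above the number of sockets each worker will hold.]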

    Read the article

  • Best way to deploy my node.js app on a Varnish/Nginx server

    - by Saif Bechan
    I am about to deploy a brand new node.js application, and I need some help setting this up. The way my setup is right now is as follows. I have Varnish running on external_ip:80. I have Nginx behind it running on internal_ip:80. Both are listening on port 80, one on an internal IP, one on an external IP. NOTE: the node.js app runs on WebSockets. Now I have my new node.js application that will listen on port 8080. Can I set Varnish up so that it is in front of both nginx and node.js? Varnish has to proxy the WebSocket traffic to port 8080, but the static files such as css, js, etc. have to go through port 80 to nginx. Nginx does not support WebSockets out of the box, otherwise I would do a setup like: varnish -> nginx -> node.js.
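
    [Editor's note: Varnish can hand WebSocket traffic straight through to a backend with pipe mode, which is the usual way to put it in front of both servers. Below is a rough illustration in Varnish 3-style VCL — the backend names and internal_ip placeholders are assumptions matching the setup described above, not a tested config.]

    backend nginx  { .host = "internal_ip"; .port = "80"; }
    backend nodejs { .host = "internal_ip"; .port = "8080"; }

    sub vcl_recv {
        # WebSocket upgrades go to the node.js app, everything else to nginx.
        if (req.http.Upgrade ~ "(?i)websocket") {
            set req.backend = nodejs;
            return (pipe);  # pipe: Varnish stops interpreting the connection
        }
        set req.backend = nginx;
    }

    sub vcl_pipe {
        # Carry the upgrade headers through to the backend.
        if (req.http.Upgrade) {
            set bereq.http.Upgrade = req.http.Upgrade;
            set bereq.http.Connection = req.http.Connection;
        }
    }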

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as on Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark the files as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command-line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them here, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });

    app.listen(port);

    Now let's create a new function to copy the records from WASD to the table service:
    1. Delete the table named "resource".
    2. Create a new table named "resource". These two steps ensure that we have an empty table.
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account name and key, which can be found on the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                        }
                    }
                });
            }
        });
    });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        // transform the records
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    As you can see, the azure SDK provides its methods in a callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation failed. Under the hood, the azure module performs the table deletion asynchronously in a POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback. If we placed the table creation code after the deletion code, the two would be invoked in parallel.

    Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find the bug in the code below? I will describe it later in this post.
    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        // transform the records
                                        for (var i = 0; i < results.rows.length; i++) {
                                            var entity = {
                                                "PartitionKey": results.rows[i][1],
                                                "RowKey": results.rows[i][0],
                                                "Value": results.rows[i][2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    error["target"] = "insertEntity";
                                                    res.send(500, error);
                                                }
                                                else {
                                                    console.log("entity inserted");
                                                }
                                            });
                                        }
                                        // send the response
                                        console.log("all done");
                                        res.send(200, "All done!");
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    Now we can publish it to the cloud and have a try. But normally we'd better test it on the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address of the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to make it use my local database. The code will be changed as below.

    // windows azure sql database
    //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
    // sql server
    var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    // windows azure storage
    //var client = azure.createTableService(storageAccountName, storageAccountKey);
    // local storage emulator
    var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

    Now let's run the application and navigate to "localhost:12345/was/init", as I hosted it on port 12345. We can see it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code. If we have a look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
    The correct logic should be: only when all entities have been copied to the table service with no error do I send the response to the browser; otherwise I send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform for each item in the array. And the third argument is invoked when all items have been processed or any error occurred; here we can send our response to the browser.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        async.forEach(results.rows,
                                            // transform the records
                                            function (row, callback) {
                                                var entity = {
                                                    "PartitionKey": row[1],
                                                    "RowKey": row[0],
                                                    "Value": row[2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        callback(error);
                                                    }
                                                    else {
                                                        console.log("entity inserted.");
                                                        callback(null);
                                                    }
                                                });
                                            },
                                            // send response
                                            function (error) {
                                                if (error) {
                                                    error["target"] = "insertEntity";
                                                    res.send(500, error);
                                                }
                                                else {
                                                    console.log("all done");
                                                    res.send(200, "All done!");
                                                }
                                            }
                                        );
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    Run it locally and now we can see the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria, as in the code here. In the code below I queried an entity by partition key and row key, and return the proper localization value in the response.

    app.get("/was/:key/:culture", function (req, res) {
        var key = req.params.key;
        var culture = req.params.culture;
        client.queryEntity(tableName, culture, key, function (error, entity) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.json(entity);
            }
        });
    });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings and endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

    var connectionString = "";
    var storageAccountName = "";
    var storageAccountKey = "";
    var tableName = "";
    var client;

    azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
        if (error) {
            console.log("ERROR: getConfigurationSettings");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(settings));
            connectionString = settings["SqlConnectionString"];
            storageAccountName = settings["StorageAccountName"];
            storageAccountKey = settings["StorageAccountKey"];
            tableName = settings["TableName"];

            console.log("connectionString = %s", connectionString);
            console.log("storageAccountName = %s", storageAccountName);
            console.log("storageAccountKey = %s", storageAccountKey);
            console.log("tableName = %s", tableName);

            client = azure.createTableService(storageAccountName, storageAccountKey);
        }
    });

    In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime will take care of it. At the end of the code we listen on the port retrieved from the SDK as well.

    azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
        if (error) {
            console.log("ERROR: getCurrentRoleInstance");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(instance));
            if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                var endpoint = instance["endpoints"]["nodejs"];
                app.listen(endpoint["port"]);
            }
            else {
                app.listen(8080);
            }
        }
    });

    But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was launched inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and be able to read the configurations. Starting Node.js in the local emulator, we can see it retrieved the connection string and storage account for local.
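
    [Editor's note: as a reference for the paragraph above, the CSDEF fragment being described might look roughly like this. The ProgramEntryPoint element is part of the service definition schema; the role name and command line here are assumptions based on the files used in this post, not the author's actual file.]

    <WorkerRole name="WorkerRole1" vmsize="Small">
      <Runtime>
        <EntryPoint>
          <!-- Launch node directly so the runtime's named pipe connects to it -->
          <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
        </EntryPoint>
      </Runtime>
      <!-- ... endpoints, configuration settings, etc. ... -->
    </WorkerRole>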
    And if we publish our application to azure, it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it is simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not support SQL parameters yet, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Node remains in commissioning status

    - by Vinitha
    I have been trying to set up Ubuntu cloud 12.04. I'm kind of new to MAAS and Ubuntu. Here is what I followed:
    1. I installed the MAAS server using the steps provided in https://wiki.ubuntu.com/ServerTeam/MAAS.
    2. For the node, I put the Ubuntu 12.04 Server image on a USB stick. Then I restarted the node and opted to enlist the node via boot media, with PXE. Once the process was done, the node was powered off as expected. I manually powered on the node, as my node is not PXE enabled. Result: no node was visible in the MAAS UI.
    3. Since step 2 didn't work, I added the node via the maas-cli command. After executing this command the node appeared in my MAAS UI, but its status has remained "Commissioning" for a long time.
    4. Then I executed "maas-cli maas nodes check-commissioning" and got "Unrecognised signature: POST check_commissioning".

    I'm not sure where the error is. Could someone please help me solve this issue? I checked the following log files but found no error related to commissioning (pserv.log / maas.log / celery.log / celery-region.log). I found this entry in my auth.log: "Nov 16 18:20:34 ubuntuCloud sshd[4222]: Did not receive identification string from xxx.xx.xx.x" — not sure if it indicates anything, as the IP mentioned is neither the node's nor the MAAS server's. I also verified the time on the server and node using the date command (at one instance the times were: server: Fri Nov 16 18:15:51 IST 2012, node: Fri Nov 16 18:15:43 IST 2012). Not sure if 'date' is the right command to set the time. I have also checked maas_local_settings.py for the MAAS URL. I'm not sure which logs need to be verified. Is there any log that can be checked on the node? Thanks, Vinitha
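
    [Editor's note: clock skew between the MAAS server and a node is worth ruling out, since commissioning can stall when timestamped API requests are rejected. 'date' only reads (or manually sets) the clock; on 12.04-era systems the usual fix is NTP. A sketch, assuming network access to the Ubuntu NTP servers:]

    # On both the MAAS server and the node (Ubuntu 12.04):
    sudo ntpdate ntp.ubuntu.com    # force an immediate one-off sync
    sudo apt-get install ntp       # or keep the clock synced continuously
    date                           # verify; both machines should now agree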

    Read the article

  • Node.js/ v8: How to make my own snapshot to accelerate startup

    - by Anand
    I have a node.js (v0.6.12) application that starts by evaluating a JavaScript file, startup.js. It takes a long time to evaluate startup.js, and I'd like to 'bake it in' to a custom build of Node if possible. The v8 source directory distributed with Node, node/deps/v8/src, contains a SconScript that can almost be used to do this. On line 302, we have

    LIBRARY_FILES = '''
    runtime.js
    v8natives.js
    array.js
    string.js
    uri.js
    math.js
    messages.js
    apinatives.js
    date.js
    regexp.js
    json.js
    liveedit-debugger.js
    mirror-debugger.js
    debug-debugger.js
    '''.split()

    Those JavaScript files are present in the same directory. Something in the build process apparently evaluates them, takes a snapshot of state, and saves it as a byte string in node/out/Release/obj/release/snapshot.cc (on Mac OS). Some customization of the startup snapshot is possible by altering the SconScript. For example, I can change the definition of the builtin Date.toString by altering date.js. I can even add new global variables by adding startup.js to the list of library files, with contents global.test = 1. However, I can't put just any JavaScript code in startup.js. If it contains Date.toString = 1;, an error results even though the code is valid at the node repl:

    Build failed: -> task failed (err #2):
        {task: libv8.a SConstruct -> libv8.a}
    make: *** [program] Error 1

    And it obviously can't make use of code that depends on libraries Node adds to v8. global.underscore = require('underscore'); causes the same error. I'd ideally like a tool, customSnapshot, where customSnapshot startup.js evaluates startup.js with node and then dumps a snapshot to a file, snapshot.cc, which I can put into the node source directory. I can then build node and tell it not to rebuild the snapshot.
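
    [Editor's note: as a rough illustration of the boundary the question describes — a hypothetical startup.js where each case restates one of the results reported above; the lookup-table example is an untested assumption about pure-JS code that only touches what v8 itself provides at snapshot time.]

    // Hypothetical startup.js for baking into a custom v8 snapshot.

    // Reported to work: plain globals evaluated against v8's builtins only.
    global.test = 1;
    global.lookupTable = (function () {
        var t = [];
        for (var i = 0; i < 256; i++) t.push(i * i); // precomputed at build time
        return t;
    })();

    // Reported to fail at build time, although valid at the node repl:
    // Date.toString = 1;

    // Also reported to fail: require() belongs to Node, not v8, so it does
    // not exist when the snapshot is generated:
    // global.underscore = require('underscore');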

    Read the article

  • .NET TreeView causes application to crash when trying to check Parent node

    - by alexD
    I have a TreeView with check boxes, and when a user checks a child node, I want to go up the tree and check each parent. However, for some reason my application blows up whenever I touch the parent node. For the nodes in my tree, I've extended TreeNode to create my own objects with some data that I need to store in them, but I still reference them as TreeNodes when checking/unchecking. My code looks like this:

    // checkBox checked event handler
    // (note: setting node.Checked below raises the check event again, so
    // without a guard — e.g. bailing out when e.Action is
    // TreeViewAction.Unknown — this can recurse until the stack overflows)
    if (node.Parent != null)
    {
        checkAllParents(node.Parent);
    }

    private void checkAllParents(TreeNode node)
    {
        node.Checked = true;
        if (node.Parent != null)
        {
            checkAllParents(node.Parent);
        }
    }

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >