Search Results

Search found 32290 results on 1292 pages for 'array key'.

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one of the questions that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we saw in the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as on Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot insert record.");
                    }
                    else {
                        res.send(200, "Inserted Successfully");
                    }
                });
            }
        });
    });

    app.listen(port);

    Now let's create a new function to copy the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account name and key, which can be found on the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create a storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                        }
                    }
                });
            }
        });
    });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        // transform the records
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be invoked when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback; if we placed the table creation code after the deletion code, the two would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        // transform the records
                                        for (var i = 0; i < results.rows.length; i++) {
                                            var entity = {
                                                "PartitionKey": results.rows[i][1],
                                                "RowKey": results.rows[i][0],
                                                "Value": results.rows[i][2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    error["target"] = "insertEntity";
                                                    res.send(500, error);
                                                }
                                                else {
                                                    console.log("entity inserted");
                                                }
                                            });
                                        }
                                        // send the response
                                        console.log("all done");
                                        res.send(200, "All done!");
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    Now we can publish it to the cloud and have a try. But normally we'd better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below.

    // windows azure sql database
    //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    // sql server
    var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    // windows azure storage
    //var client = azure.createTableService(storageAccountName, storageAccountKey);
    // local storage emulator
    var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

    Now let's run the application and navigate to "localhost:12345/was/init", as I hosted it on port 12345. We can see it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code. If we have a look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
    The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise send the error message. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform on each item in the array. The third argument will be invoked when all items have been processed or an error has occurred; here we can send our response to the browser.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        async.forEach(results.rows,
                                            // transform the records
                                            function (row, callback) {
                                                var entity = {
                                                    "PartitionKey": row[1],
                                                    "RowKey": row[0],
                                                    "Value": row[2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        callback(error);
                                                    }
                                                    else {
                                                        console.log("entity inserted.");
                                                        callback(null);
                                                    }
                                                });
                                            },
                                            // send response
                                            function (error) {
                                                if (error) {
                                                    error["target"] = "insertEntity";
                                                    res.send(500, error);
                                                }
                                                else {
                                                    console.log("all done");
                                                    res.send(200, "All done!");
                                                }
                                            }
                                        );
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    Run it locally and now we can see the response is sent after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria; for example, see the code here. In the code below I queried an entity by partition key and row key, and returned the proper localization value in the response.

    app.get("/was/:key/:culture", function (req, res) {
        var key = req.params.key;
        var culture = req.params.culture;
        client.queryEntity(tableName, culture, key, function (error, entity) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.json(entity);
            }
        });
    });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime, which means we can define these values in the CSCFG and CSDEF files and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. We can also use the endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings and endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

    var connectionString = "";
    var storageAccountName = "";
    var storageAccountKey = "";
    var tableName = "";
    var client;

    azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
        if (error) {
            console.log("ERROR: getConfigurationSettings");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(settings));
            connectionString = settings["SqlConnectionString"];
            storageAccountName = settings["StorageAccountName"];
            storageAccountKey = settings["StorageAccountKey"];
            tableName = settings["TableName"];

            console.log("connectionString = %s", connectionString);
            console.log("storageAccountName = %s", storageAccountName);
            console.log("storageAccountKey = %s", storageAccountKey);
            console.log("tableName = %s", tableName);

            client = azure.createTableService(storageAccountName, storageAccountKey);
        }
    });

    In this way we don't need to amend the code when moving between the local and cloud environments, since the service runtime will take care of it. At the end of the code we will also have the application listen on the port retrieved from the SDK.

    azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
        if (error) {
            console.log("ERROR: getCurrentRoleInstance");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(instance));
            if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                var endpoint = instance["endpoints"]["nodejs"];
                app.listen(endpoint["port"]);
            }
            else {
                app.listen(8080);
            }
        }
    });

    But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was launched inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, then add a child element named EntryPoint which specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and will be able to read the configuration. Start the Node.js application in the local emulator and we can see it retrieved the local connection string and storage account.
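    As a rough illustration (this snippet is my reconstruction, not from the original post; the role name and file layout are assumptions), the Runtime/EntryPoint change in the CSDEF might look like this:

    <!-- ServiceDefinition.csdef: make node.exe the role's entry point so it
         owns the named pipe to the service runtime. Role name and paths assumed. -->
    <WorkerRole name="NodeWorkerRole">
      <Runtime>
        <EntryPoint>
          <ProgramEntryPoint commandLine="node.exe .\index.js" setReadyOnProcessStart="true" />
        </EntryPoint>
      </Runtime>
    </WorkerRole>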
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configuration.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and endpoint information, and showed that in order to make the service runtime available to my Node.js application I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point.

    I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple, easy to learn and deploy, and uses a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Specified key is not a valid size for this algorithm...

    - by phenevo
    Hi, I have a problem with this code:

    RijndaelManaged rijndaelCipher = new RijndaelManaged();

    // Set key and IV
    rijndaelCipher.Key = Convert.FromBase64String("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz012345678912");
    rijndaelCipher.IV = Convert.FromBase64String("1234567890123456789012345678901234567890123456789012345678901234");

    It throws: "Specified key is not a valid size for this algorithm." and "Specified initialization vector (IV) does not match the block size for this algorithm." What's wrong with these strings? Could you give me some example strings that work?
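    A possible explanation, not from the original question: both Base64 strings above are 64 characters long and decode to 48 bytes, but RijndaelManaged (with its default 128-bit block size) accepts only 16-, 24- or 32-byte keys and requires a 16-byte IV. A minimal sketch with validly sized Base64 inputs (the strings here are arbitrary examples):

    using System;
    using System.Security.Cryptography;

    class RijndaelSizes
    {
        static void Main()
        {
            RijndaelManaged rijndaelCipher = new RijndaelManaged();
            // 44 Base64 chars decode to 32 bytes: a valid 256-bit key.
            rijndaelCipher.Key = Convert.FromBase64String("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef0123456789A=");
            // 24 Base64 chars decode to 16 bytes: matches the 128-bit block size.
            rijndaelCipher.IV = Convert.FromBase64String("ABCDEFGHIJKLMNOP01234w==");
            Console.WriteLine("Key: {0} bytes, IV: {1} bytes",
                rijndaelCipher.Key.Length, rijndaelCipher.IV.Length);
        }
    }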

  • Pseudocode: a clear definition?

    - by Cian E
    The following code is an example of what I think would qualify as pseudocode, since it does not execute in any language but the logic is correct.

    string checkRubric(gpa, major)
        bool brake = false
        num lastRange
        num rangeCounter
        string assignment = "unassigned"
        array bus['business'] = array('person a' => array(0, 2.9), 'person b' => array(3, 4))
        array cis['computer science'] = array('person c' => array(0, 2.9), 'person d' => array(3, 4))
        array lib['english'] = array('person e' => array(0, 4))
        array rubric = array(bus, cis, lib)
        foreach (rubric as fieldAr)
            foreach (fieldAr as field => advisorAr)
                if (major == field)
                    foreach (advisorAr as advisor => gpaRangeAr)
                        rangeCounter = 0
                        foreach (gpaRangeAr as gpaValue)
                            if (rangeCounter < 1)
                                lastRange = gpaValue
                            else if (gpa >= lastRange && gpa <= gpaValue)
                                assignment = advisor
                                brake = true
                                break
                            endif
                            rangeCounter++
                        endforeach
                        if (brake == true)
                            break
                        endif
                    endforeach
                    if (brake == true)
                        break
                    endif
                endif
            endforeach
            if (brake == true)
                break
            endif
        endforeach
        return assignment

    For the past couple of weeks I've been trying to come up with a clear definition of what pseudocode actually is. Is it relative to the programmer, or is there an actual clear-cut syntax? I say pseudocode is any code that does not execute; how about you? Thanks (links on this subject welcome).

  • How to dismiss keyboard for UITextView with return key?

    - by iPhoney
    In IB's library, the introduction tells us that when the return key is pressed, the keyboard for UITextView will disappear. But in fact the return key only acts as '\n'. I can add a button and use [txtView resignFirstResponder] to hide the keyboard, but is there a way to attach an action to the return key on the keyboard so that I don't need to add another button?
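    One common approach, not from the original question: intercept the newline in the UITextViewDelegate and resign first responder there. A sketch:

    // In the class that is set as the text view's delegate:
    - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range
     replacementText:(NSString *)text
    {
        if ([text isEqualToString:@"\n"]) {
            [textView resignFirstResponder]; // dismiss the keyboard
            return NO;                       // swallow the newline itself
        }
        return YES;
    }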

  • How do I insert a unique key into a table?

    - by Ben McCormack
    I want to insert data into a table where I don't know the next unique key that I need. I'm not sure how to format my INSERT query so that the value of the key field is 1 greater than the maximum value for the key in the table. I know this is a hack, but I'm just running a quick test against a database and need to make sure I always send over a unique key. Here's the SQL I have so far:

    INSERT INTO [CMS2000].[dbo].[aDataTypesTest]
               ([KeyFld]
               ,[Int1])
         VALUES
               ((SELECT MAX([KeyFld]) FROM [dbo].[aDataTypesTest]) + 1
               ,1)

    which errors out with:

    Msg 1046, Level 15, State 1, Line 5
    Subqueries are not allowed in this context. Only scalar expressions are allowed.

    I'm not able to modify the underlying database table. What do I need to do to ensure a unique insert in my INSERT SQL code?
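    A sketch of one common workaround, not from the original question: SQL Server rejects subqueries inside a VALUES list, but the INSERT ... SELECT form is allowed (it is still race-prone under concurrency unless the table is locked for the statement):

    INSERT INTO [CMS2000].[dbo].[aDataTypesTest] ([KeyFld], [Int1])
    SELECT MAX([KeyFld]) + 1, 1
    FROM [dbo].[aDataTypesTest] WITH (TABLOCKX);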

  • Dilemma of Sending a Byte Array from JavaScript to COM.

    - by Voulnet
    Hello there, I'm having a bit of a problem because neither JavaScript nor ActiveX (written in C++) is behaving like a good little child. All I'm asking them to do is for JavaScript to send a byte array and for the ActiveX to receive the byte array correctly in order to do more computation. This is how I declared my byte array in JS, and I verified it inside JS:

    var arr = new Array(0x00, 0xA4, 0x04, 0x00, 0x10, 0xA0, 0x00, 0x00, 0x00, 0x18, 0x30, 0x03, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);

    JavaScript sends this array as an argument to an ActiveX method. Here is the tricky part: I want the ActiveX method to receive the byte array as a SAFEARRAY or VARIANT, but I can't get it to work for the life of me. I tried debugging and inspecting the contents received inside the ActiveX as both SAFEARRAY and VARIANT, but to no avail. Here is the IDL segment:

    [id(7), helpstring("NEW Sends APDU to Card and Reads the Response")]
    HRESULT NewSendAPDU([in] VARIANT pAPDU, [out, retval] VARIANT* pAPDUResponse);

    Any help is greatly appreciated. Thanks in advance!

  • Alternative to array_shift function

    - by SoLoGHoST
    OK, I need keys to be preserved within this array and I just want to shift the first element from it. Actually I know that the first key of this array will always be 1 when I do this:

    // Sort it by 1st group and 1st layout.
    ksort($disabled_sections);
    foreach ($disabled_sections as &$grouplayout)
        ksort($grouplayout);

    Basically I'd rather not have to ksort it just to grab the element whose key is 1. And honestly, I'm not a big fan of array_shift; it just takes too long, IMO. Is there another way? Perhaps a way to extract the entire sub-array at $disabled_sections[1] without having to do a foreach, a sort, and an array_shift? I just want to add $disabled_sections[1] to a different array and remove it from this array altogether, while keeping both arrays' keys structured the way they are. Technically, it would even be fine to do this:

    $array = array();
    $array = $disabled_sections[1];

    But it needs to remove it from $disabled_sections. Can I use something like this approach?

    $array = array();
    $array = $disabled_sections[1];
    $disabled_sections -= $disabled_sections[1];

    Is something like the above even possible? Thanks.
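    Not from the original post, but close to the last snippet: PHP removes a single element by key with unset(), which leaves every other key in place. A sketch:

    <?php
    // Assumes $disabled_sections has an entry under key 1, as described above.
    $array = $disabled_sections[1];   // copy the sub-array out
    unset($disabled_sections[1]);     // remove it; all remaining keys are preserved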

  • How to sort a Map<Key, Value> on the values in Java?

    - by Abe
    I am relatively new to Java, and often find that I need to sort a Map on the values. Since the values are not unique, I find myself converting the keySet into an array, and sorting that array with a custom comparator that sorts on the value associated with each key. Is there an easier way?
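    An illustrative sketch, not from the original question: copy the entries into a list and sort that list with a value comparator (a plain HashMap itself has no order to sort in place):

    import java.util.*;

    class SortMapByValue
    {
        static <K, V extends Comparable<V>> List<Map.Entry<K, V>> entriesSortedByValue(Map<K, V> map)
        {
            List<Map.Entry<K, V>> entries = new ArrayList<Map.Entry<K, V>>(map.entrySet());
            Collections.sort(entries, new Comparator<Map.Entry<K, V>>() {
                public int compare(Map.Entry<K, V> a, Map.Entry<K, V> b) {
                    return a.getValue().compareTo(b.getValue()); // duplicate values are fine
                }
            });
            return entries;
        }

        public static void main(String[] args)
        {
            Map<String, Integer> m = new HashMap<String, Integer>();
            m.put("a", 3);
            m.put("b", 1);
            m.put("c", 2);
            System.out.println(entriesSortedByValue(m)); // [b=1, c=2, a=3]
        }
    }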

  • Cannot truncate table because it is being referenced by a FOREIGN KEY constraint?

    - by ctrlShiftBryan
    Using MSSQL2005: can I truncate a table with a foreign key constraint if I first truncate the child table (the table with the primary key of the FK relationship)? I know I can use a DELETE without a WHERE clause and then RESEED the identity, or remove the FK, truncate, and recreate it, but I thought that as long as you truncate the child table first you'd be OK. However, I'm getting a "Cannot truncate table 'TableName' because it is being referenced by a FOREIGN KEY constraint." error.
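    For reference (not part of the original question): SQL Server refuses to TRUNCATE any table referenced by a FOREIGN KEY, even when the referencing table is empty, so the DELETE-and-reseed route the poster mentions is the usual fallback. A sketch, with a hypothetical table name:

    DELETE FROM [dbo].[ParentTable];
    DBCC CHECKIDENT ('dbo.ParentTable', RESEED, 0);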

  • What key on a keyboard can be detected in the browser but won't show up in a text input?

    - by Brady
    I know this one is going to sound weird, but hear me out. I have a heart rate monitor that is hooked up like a keyboard to the computer. Each time the heart rate monitor detects a pulse it sends a keystroke. This keystroke is then detected by a web-based game running in the browser. Currently I'm sending the following keystroke: (`). In the browser-based game I'm detecting this key fine, and when the user is on any data input screen the (`) character is ignored using a bit of JavaScript. The problem comes when I leave the browser and go back to using the operating system in other ways: the (`) starts appearing everywhere. So my question is: is there a key that can be sent via the keyboard that is detectable in the browser but won't have any notable output on the screen if I switch to other applications? I'm kind of looking for a null key: a key that is detectable in the browser but has no effect on the browser or any other system application if pressed.

  • With VIM, use both snipMate and pydiction together (share the <tab> key?)

    - by thornomad
    I am trying to use snipMate and pydiction in Vim together; however, both use the <tab> key to perform their genius-auto-completion-snippet-rendering-goodness-that-I-so-desire. When pydiction is installed, snipMate stops working. I assume it's because they can't both own the <tab> key. How can I get them to work together? I wouldn't mind mapping one of them to a different key, but I am not really sure how to do this (maybe pydiction to the <ctrl-n> key so it mimics Vim's autocomplete?). Here is the relevant .vimrc:

    filetype indent plugin on
    autocmd FileType python set ft=python.django
    autocmd FileType html set ft=html.django_template
    let g:pydiction_location = '~/.vim/ftplugin/pydiction-1.2/complete-dict'
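    One possible direction, not from the original question: the classic snipMate plugin drives its <tab> mapping through a TriggerSnippet() function, so rebinding that to another key should free <tab> for pydiction. Treat the function name as an assumption about the snipMate version in use; a sketch:

    " Assumes snipMate exposes TriggerSnippet(); move its trigger to <C-J>
    " so that pydiction keeps <tab> for completion.
    inoremap <silent> <C-J> <C-R>=TriggerSnippet()<CR>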

  • Best practice for writing ARRAYS

    - by Douglas
    I've got an array with about 250 entries in it, each of which is its own array of values. Each entry is a point on a map, and each array holds info for: name, another array for points this point can connect to, latitude, longitude, short form of the name, a boolean, and another boolean. The array has been written by another developer in my team, and he has written it as such:

    names[0] = new Array;
    names[0][0] = "Campus Ice Centre";
    names[0][1] = new Array(0, 1, 2);
    names[0][2] = 43.95081811364498;
    names[0][3] = -78.89848709106445;
    names[0][4] = "CIC";
    names[0][5] = false;
    names[0][6] = false;

    names[1] = new Array;
    names[1][0] = "Shagwell's";
    names[1][1] = new Array(0, 1);
    names[1][2] = 43.95090307839151;
    names[1][3] = -78.89815986156464;
    names[1][4] = "shg";
    names[1][5] = false;
    names[1][6] = false;

    Where I would probably have personally written it like this:

    var names = [];
    names[0] = new Array("Campus Ice Centre", new Array(0, 1, 2), 43.95081811364498, -78.89848709106445, "CIC", false, false);
    names[1] = new Array("Shagwell's", new Array(0, 1), 43.95090307839151, -78.89815986156464, "shg", false, false);

    They both work perfectly fine of course, but what I'm wondering is: 1) does one take longer than the other to actually process? 2) am I incorrect in assuming there is a benefit to the compactness of my version of the same thing? I'm just a little worried about his 3000 lines of code versus my 3-400 to get the same result. Thanks in advance for any guidance.
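    A third option, not in the original question: object literals drop the positional indexes entirely, so nobody has to remember what names[i][3] means. The field names below are hypothetical; a sketch:

    var names = [
        { name: "Campus Ice Centre", connections: [0, 1, 2],
          lat: 43.95081811364498, lng: -78.89848709106445,
          shortName: "CIC", flagA: false, flagB: false },
        { name: "Shagwell's", connections: [0, 1],
          lat: 43.95090307839151, lng: -78.89815986156464,
          shortName: "shg", flagA: false, flagB: false }
    ];
    console.log(names[0].shortName); // "CIC"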

  • Strange Bubble sort behaviour.

    - by user271528
    Can anyone explain why this bubble sort function doesn't work and why I lose numbers in my output? I'm very new to C, so please forgive me if this is something very obvious I have missed.

    #include <stdio.h>
    #include <stdlib.h>

    int bubble(int array[], int length)
    {
        int i, j;
        int temp;
        for (i = 0; i < (length); ++i) {
            for (j = 0; j < (length - 1); ++j) {
                if (array[i] > array[i+1]) {
                    temp = array[i+1];
                    array[i+1] = array[i];
                    array[i] = temp;
                }
            }
        }
        return 0;
    }

    int main()
    {
        int array[] = {12, 234, 3452, 5643, 0};
        int i;
        int length;
        length = (sizeof(array) / sizeof(int));
        printf("Size of array = %d\n", length);
        bubble(array, length);
        for (i = 0; i < (length); ++i) {
            printf("%d\n", array[i]);
        }
        return 0;
    }

    Output:

    Size of array = 5
    12
    234
    3452
    0
    0
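    For comparison (not part of the original post): the inner loop increments j, but the comparison and swap index with i, so each pass compares the same pair over and over and even reads past the end of the array once i reaches length - 1. A corrected sketch:

    #include <stdio.h>

    /* Bubble sort: compare adjacent elements indexed by the inner counter j. */
    int bubble(int array[], int length)
    {
        int i, j, temp;
        for (i = 0; i < length; ++i) {
            for (j = 0; j < length - 1 - i; ++j) {  /* "- i" skips the sorted tail */
                if (array[j] > array[j + 1]) {
                    temp = array[j + 1];
                    array[j + 1] = array[j];
                    array[j] = temp;
                }
            }
        }
        return 0;
    }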

  • Using Hadoop, are my reducers guaranteed to get all the records with the same key?

    - by samg
    I'm running a Hadoop job (using Hive, actually) which is supposed to uniq lines in a lot of text files. More specifically, it chooses the most recently timestamped record for each key in the reduce step. Does Hadoop guarantee that every record with the same key, output by the map step, will go to a single reducer, even if there are many reducers running across a cluster? I'm worried that the mapper output might be split after the shuffle happens, in the middle of a set of records with the same key.

  • Specify private SSH-key to use when executing shell command with or without Ruby?

    - by Christoffer
    A rather unusual situation perhaps, but I want to specify a private SSH key to use when executing a shell (git) command from the local computer. Basically like this:

    git clone git@github.com:TheUser/TheProject.git -key "/home/christoffer/ssh_keys/theuser"

    Or even better (in Ruby):

    with_key("/home/christoffer/ssh_keys/theuser") do
      sh("git clone git@github.com:TheUser/TheProject.git")
    end

    I have seen examples of connecting to a remote server with Net::SSH that uses a specified private key, but this is a local command. Is it possible? Thanks
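    One well-known route, not from the original question: git consults the GIT_SSH environment variable, so pointing it at a small wrapper script that passes -i lets you swap keys per command. A sketch (paths are hypothetical):

    # ssh_wrapper.sh (made executable) contains:
    #   #!/bin/sh
    #   exec ssh -i /home/christoffer/ssh_keys/theuser "$@"

    def with_key(wrapper)
      old = ENV["GIT_SSH"]
      ENV["GIT_SSH"] = wrapper   # git will call this instead of plain ssh
      yield
    ensure
      ENV["GIT_SSH"] = old
    end

    with_key("/home/christoffer/ssh_wrapper.sh") do
      system("git clone git@github.com:TheUser/TheProject.git")
    end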

  • PHP Replace chars by functions

    - by user1860570
    I am trying to replace characters using functions:

    <?php
    $string = "Hi everybody people [gal~images/articles~100~100~4] here other imagen [gal~images/products~100~100~3]";
    $regex = "/\[(.*?)\]/";
    preg_match_all($regex, $string, $matches);
    for ($i = 0; $i < count($matches[1]); $i++) {
        $match = $matches[1][$i];
        $array = explode('~', $match);
        //$newValuet="gal("".$array[1]."","".$array[2]."","".$array[3]."","".$array[4]."")";
        $newValue = "gal(".$array[1].",".$array[2].",".$array[3].",".$array[4].")";
        $string = str_replace($matches[0][$i], $newValue, $string);
    }
    echo $string;
    ?>

    The problem is here:

    $newValue = "gal(".$array[1].",".$array[2].",".$array[3].",".$array[4].")";
    $string = str_replace($matches[0][$i], $newValue, $string);

    The function does not give the right results. I have tried different methods but the problem continues. If you can answer, please show me some modification of this code so I can understand. Thanks a lot for all your help.
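    A guess at the intent, not from the original post: the commented-out line fails because literal double quotes inside a double-quoted PHP string must be escaped (or the string single-quoted). preg_replace_callback also collapses the match-then-str_replace loop into one call; a sketch:

    <?php
    $string = "Hi everybody people [gal~images/articles~100~100~4] here other imagen [gal~images/products~100~100~3]";

    // Rebuild each [gal~...] tag as a gal("...") call with quoted arguments.
    $string = preg_replace_callback('/\[(.*?)\]/', function ($m) {
        $a = explode('~', $m[1]);
        return 'gal("' . $a[1] . '","' . $a[2] . '","' . $a[3] . '","' . $a[4] . '")';
    }, $string);

    echo $string; // gal("images/articles","100","100","4") ... etc.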

  • How do I mock a method with an open array parameter in PascalMock?

    - by Oliver Giesen
    I'm currently in the process of getting started with unit testing and mocking for good, and I stumbled over the following method that I can't seem to fabricate a working mock implementation for:

    function GetInstance(const AIID: TGUID; out AInstance;
      const AArgs: array of const;
      const AContextID: TImplContextID = CID_DEFAULT): Boolean;

    (TImplContextID is just an alias for Integer.)

    I thought it would have to look something like this:

    function TImplementationProviderMock.GetInstance(const AIID: TGUID;
      out AInstance; const AArgs: array of const;
      const AContextID: TImplContextID): Boolean;
    begin
      Result := AddCall('GetInstance')
        .WithParams([@AIID, AContextID])
        .ReturnsOutParams([AInstance])
        .ReturnValue;
    end;

    But the compiler complains about the .ReturnsOutParams([AInstance]) saying "Bad argument type in variable type array constructor.". Also, I haven't found a way to specify the open array parameter AArgs at all. And is using the @-notation for the TGUID-typed parameter the right way to go? Is it possible to mock this method with the current version of PascalMock at all?

    Update: I now realize I got the purpose of ReturnsOutParams completely wrong: it's intended to be used for populating the values to be returned when defining the expectations, rather than for mocking the call itself. I now think the correct syntax for mocking the out parameter would probably have to look more like this:

    function TImplementationProviderMock.GetInstance(const AIID: TGUID;
      out AInstance; const AArgs: array of const;
      const AContextID: TImplContextID): Boolean;
    var
      lCall: TMockMethod;
    begin
      lCall := AddCall('GetInstance').WithParams([@AIID, AContextID]);
      Pointer(AInstance) := lCall.OutParams[0];
      Result := lCall.ReturnValue;
    end;

    The questions that remain are how to mock the open array parameter AArgs, and whether passing the TGUID argument (i.e. a value type) by address will work out.

  • ssh-rsa public key validation using a regular expression.

    - by Warlax
    What regular expression can I use (if any) to validate that a given string is a legal ssh-rsa public key? I only need to validate the actual key; I don't care about the key type that precedes it or the username comment after it. Ideally, someone will also provide the Python code to run the regex validation. Thanks.
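    A sketch of one possible answer, not from the original question: a coarse regex for the Base64 body, plus a stricter check that decodes it and verifies the length-prefixed "ssh-rsa" header that RFC 4253 puts at the start of the blob:

    import base64
    import re
    import struct

    KEY_RE = re.compile(r'^[A-Za-z0-9+/]+={0,2}$')

    def is_valid_ssh_rsa_blob(blob):
        """Validate the Base64 key body (no 'ssh-rsa ' prefix, no comment)."""
        if not KEY_RE.match(blob):
            return False
        try:
            data = base64.b64decode(blob)
        except Exception:
            return False
        if len(data) < 11:
            return False
        # The decoded blob starts with a 4-byte big-endian length, then "ssh-rsa".
        (length,) = struct.unpack('>I', data[:4])
        return length == 7 and data[4:11] == b'ssh-rsa'

    print(is_valid_ssh_rsa_blob('AAAAB3NzaC1yc2E='))  # header-only example: True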

  • Facebook Stream.Get -- How to access data in array?

    - by user316841
    On stream.get, I try to echo $feeds["posts"][$i]["attachment"]["href"]; and it returns the URL. But in the same array scope where "type" is located (which returns strings like "video", etc.), trying $feeds["posts"][$i]["attachment"]["type"] returns nothing at all! Here's an array through PHP's var_dump: http://pastie.org/930475. So, from testing, I suppose this is protected by Facebook? Does that make sense at all? Here it is in full: http://pastie.org/930490, but not all attachment/media/types have values. It's also strange, because I can't access it through [attachment][media][href] or [attachment][media][type], and if I try [attachment][media][0][type] or href, it gives me a string offset error.

    ["attachment"]=> array(8) {
      ["media"]=> array(1) {
        [0]=> array(5) {
          ["href"]=> string(55) "http://www.facebook.com/video/video.php?v=1392999461587"
          ["alt"]=> string(13) "IN THE STUDIO"
          ["type"]=> string(5) "video"

    My question is: is this protected by Facebook, or can we actually access this array position?
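    A possible explanation, not from the original post: per the var_dump, "type" lives under ["attachment"]["media"][0], not directly under ["attachment"], and posts without media make that path a non-array (hence the string offset error), so guarded access is needed. A sketch, assuming $feeds is the decoded stream.get result:

    <?php
    foreach ($feeds["posts"] as $post) {
        if (isset($post["attachment"]["media"][0]["type"])) {
            echo $post["attachment"]["media"][0]["type"], "\n"; // e.g. "video"
        } else {
            echo "(no media attachment on this post)\n";
        }
    }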

  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array.sort!, but it seemed surprisingly slow, taking 32s for an array of 3700 objects. So, just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match, as shown in the code below. Tada! The sort now takes 700ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7.

    def self.sort_events(events)
      event_sorters = Array.new(events.length) { |i| EventSorter.new(i, events[i]) }
      event_sorters.sort!
      event_sorters.collect { |es| events[es.index] }
    end

    private

    # Class used by sort_events
    class EventSorter
      attr_reader :sqn
      attr_reader :time
      attr_reader :index

      def initialize(index, event)
        @index = index
        @sqn = event.sqn
        @time = event.time
      end

      def <=>(b)
        @time != b.time ? @time <=> b.time : @sqn <=> b.sqn
      end
    end
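    A side note, not from the original post: sort copies references, not objects; the likelier cost is the comparator calling event.time and event.sqn (ActiveRecord attribute reads) O(n log n) times, whereas EventSorter caches them once per object. Ruby's built-in sort_by does the same caching via a Schwartzian transform; a sketch, assuming each event responds to #time and #sqn:

    # Computes the [time, sqn] key once per event, then sorts on the cached keys.
    sorted = events.sort_by { |e| [e.time, e.sqn] }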

  • Sorted array: how to get position before and after using name? as3

    - by user1560239
    I have been working on a project and Stack Overflow has helped me with a few problems so far, so I am very thankful! My question is this: I have an array like this:

    var records:Object = {};
    var arr:Array = [
        records["nh"] = { medinc:66303, statename:"New Hampshire" },
        records["ct"] = { medinc:65958, statename:"Connecticut" },
        records["nj"] = { medinc:65173, statename:"New Jersey" },
        records["md"] = { medinc:64596, statename:"Maryland" },
        // etc... for all 50 states.
    ];

    And then I have the array sorted reverse numerically (descending) like this:

    arr.sortOn("medinc", Array.NUMERIC);
    arr.reverse();

    Can I use the name of the record (i.e. "nj" for New Jersey) to get the values at the numeric positions above and below that record in the array? Basically, medinc is the median income of US states, and I am trying to show a ranking system: a user would click Texas, for example, and it would show the medinc value for Texas, along with the state that ranks one position below and the state that ranks one position above in the array. Thanks for your help!
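    One way this could work, not from the original question: records["nj"] and the corresponding array entry are the same object reference, so Array.indexOf can locate the clicked state's rank, and the neighbours are simply index - 1 and index + 1. A sketch:

    // Assumes arr has already been sorted descending by medinc, as above.
    function showNeighbours(stateKey:String):void {
        var i:int = arr.indexOf(records[stateKey]);
        if (i == -1) return;                         // unknown key
        trace("selected:", arr[i].statename, arr[i].medinc);
        if (i > 0)
            trace("one rank above:", arr[i - 1].statename);
        if (i < arr.length - 1)
            trace("one rank below:", arr[i + 1].statename);
    }
    showNeighbours("nj");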

  • Association Mapping Details confusion?

    - by AaronLS
    I have never understood why the associations in Entity Framework look the way they do in the Mapping Details window. When I select the line between two tables for an association, for example FK_ApplicationSectionNodes_FormItems, it shows this:

    Association
        Maps to ApplicationSectionNodes
            FormItems
                (key) FormItemId : Int32 <--> FormItemId : int
            ApplicationSectionNodes
                (key) NodeId : Int32 <--> (key) NodeId : int

    Fortunately this one was created automatically for me based on the foreign key constraints in the database, but whenever no constraints exist, I have a hard time creating associations manually (when the database doesn't have a diagram set up) because I don't understand the Mapping Details for associations.

    The FormItems table has a primary key identity column FormItemId, and ApplicationSectionNodes contains a FormItemId column that is the foreign key, with NodeId as its own primary key identity column. What really makes no sense to me is why the association lists NodeId at all, when NodeId doesn't have anything to do with the foreign key relationship. (It's even more confusing with self-referencing relationships, but maybe if I could understand the above case I'd have a better handle on it.)

    CREATE TABLE [dbo].[ApplicationSectionNodes](
        [NodeID] [int] IDENTITY(1,1) NOT NULL,
        [OutlineText] [varchar](5000) NULL,
        [ParentNodeID] [int] NULL,
        [FormItemId] [int] NULL,
        CONSTRAINT [PK_ApplicationSectionNodes] PRIMARY KEY CLUSTERED
        (
            [NodeID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
        CONSTRAINT [UQ_ApplicationSectionNodesFormItemId] UNIQUE NONCLUSTERED
        (
            [FormItemId] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    ALTER TABLE [dbo].[ApplicationSectionNodes] WITH NOCHECK ADD CONSTRAINT [FK_ApplicationSectionNodes_ApplicationSectionNodes] FOREIGN KEY([ParentNodeID])
    REFERENCES [dbo].[ApplicationSectionNodes] ([NodeID])
    GO
    ALTER TABLE [dbo].[ApplicationSectionNodes] NOCHECK CONSTRAINT [FK_ApplicationSectionNodes_ApplicationSectionNodes]
    GO
    ALTER TABLE [dbo].[ApplicationSectionNodes] WITH NOCHECK ADD CONSTRAINT [FK_ApplicationSectionNodes_FormItems] FOREIGN KEY([FormItemId])
    REFERENCES [dbo].[FormItems] ([FormItemId])
    GO
    ALTER TABLE [dbo].[ApplicationSectionNodes] NOCHECK CONSTRAINT [FK_ApplicationSectionNodes_FormItems]
    GO

    The FormItems table:

    CREATE TABLE [dbo].[FormItems](
        [FormItemId] [int] IDENTITY(1,1) NOT NULL,
        [FormItemType] [int] NULL,
        CONSTRAINT [PK_FormItems] PRIMARY KEY CLUSTERED
        (
            [FormItemId] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    ALTER TABLE [dbo].[FormItems] WITH NOCHECK ADD CONSTRAINT [FK_FormItems_FormItemTypes] FOREIGN KEY([FormItemType])
    REFERENCES [dbo].[FormItemTypes] ([FormItemTypeId])
    GO
    ALTER TABLE [dbo].[FormItems] NOCHECK CONSTRAINT [FK_FormItems_FormItemTypes]
    GO
